
Wallaroo.AI: Techniques for Faster, Easier AI

Join Microsoft Reactor and engage with startups and developers live

Ready to get started with AI? Microsoft Reactor provides events, training, and community resources to help startups, entrepreneurs, and developers build their next business on AI technology. Join us!

  • Format: Livestream

Topics: Infrastructure for AI, AI, Data Science and Machine Learning

Language: English

  • Events in this series: 7

Techniques for Faster, Easier AI: Model Observability and Workload Orchestration

This series uses a cashierless checkout scenario to illustrate how to overcome challenges in monitoring CV AI models deployed at the edge and in deploying and automating multi-cloud workloads.

Ideal for ML Engineers, Data Scientists, or AI developers, these sessions will help you build the skills and understand the processes needed to easily deploy, scale, and manage AI workloads across emergent AI use cases.

The sessions will show:

● Model Observability and Optimization - We will examine how to easily deploy models to remote edge devices such as in-store cameras and checkouts, monitor them for drift and accuracy, and take action on underperforming models.

● Workload Orchestration - We will build on the retail industry example to support accurate demand forecasting across in-store products to help inform product inventory supplies and distribution. We will demonstrate how to use orchestration capabilities to drive automation of AI workloads that involve ingestion and running of large batches of multimodal datasets for forecast generation.

Hands On:

Practice the techniques shown in the sessions by downloading the Wallaroo.AI Inference Server Free Edition from the Azure Marketplace (https://aka.ms/Wallaroo.AI-Free).

Speakers

Past events in this series

All times shown in Coordinated Universal Time (UTC)

Thursday, November 09, 2023

Deploying and Managing Computer Vision Models at the Edge

7:00 PM - 8:00 PM (UTC)

In this session, led by Steve Notley from the Field Engineering team at Wallaroo.AI, you will learn how to deploy, serve, optimize, and observe Computer Vision models in production, both in the cloud and at the edge.

  • Format: Livestream

Topics:

Language: English

View on demand

Tuesday, January 09, 2024

ML Model Insights and Observability at the Edge

9:00 PM - 10:00 PM (UTC)

Learn how to get instant data insights and drift detection alerts from pipelines deployed at edge locations without any operational overhead. See how to create aggregated drift detection assays on the inputs and outputs of pipelines deployed in an ML operations center and across all edge deployments.

Why should I attend? You will leave this session with a comprehensive understanding of how to observe ML models deployed at the edge for data drift and take corrective action on underperforming models.

Resources: Try this Computer Vision model and other common AI use cases with the Wallaroo.AI Azure Inference Server Freemium Offer on Azure Marketplace (https://portal.azure.com), and also try the free Wallaroo.AI Community Edition (https://portal.wallaroo.community/).
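
To make the drift-assay idea concrete, the sketch below computes one common drift statistic, the population stability index (PSI), between a baseline window and a recent window of model outputs. It is a minimal, generic illustration of the technique discussed in the session, not the Wallaroo.AI assay API; the sample data and the 0.2 alert threshold are assumptions.

    # Illustrative drift check: PSI between a baseline window and a recent
    # window of model outputs. Generic sketch only, not the Wallaroo.AI assay API.
    import numpy as np

    def psi(baseline, recent, bins=10, eps=1e-6):
        # Bin edges come from the baseline distribution's quantiles.
        edges = np.quantile(baseline, np.linspace(0.0, 1.0, bins + 1))
        lo, hi = edges[0], edges[-1]
        # Clip both windows into the baseline range so every value lands in a bin.
        b_counts, _ = np.histogram(np.clip(baseline, lo, hi), edges)
        r_counts, _ = np.histogram(np.clip(recent, lo, hi), edges)
        b_frac = b_counts / len(baseline) + eps
        r_frac = r_counts / len(recent) + eps
        return float(np.sum((r_frac - b_frac) * np.log(r_frac / b_frac)))

    rng = np.random.default_rng(0)
    baseline = rng.normal(0.0, 1.0, 5_000)  # e.g. detection confidences at launch
    recent = rng.normal(0.3, 1.2, 5_000)    # e.g. confidences from the last hour
    print(f"PSI = {psi(baseline, recent):.3f} (a common rule of thumb flags drift above ~0.2)")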

  • Format: Livestream

Topics: Infrastructure for AI

Language: English

View on demand

Tuesday, January 23, 2024

ML Workflow Automation

9:00 PM - 10:00 PM (UTC)

About this session: Learn how to automate and scale ML workflows with workload orchestration for both batch and real-time inference serving. We will dive into the steps to deploy, automate, and scale recurring production ML workloads that ingest data from predefined data sources to run inferences, chain pipelines, and send inference results to predefined destinations for analyzing model insights and assessing business outcomes.

Why should I attend? You will leave this session with a comprehensive understanding of how to manage and orchestrate model automation for batch and real-time inference serving scenarios.

Resources: Try this Computer Vision model and other common AI use cases with the Wallaroo.AI Azure Inference Server Freemium Offer on Azure Marketplace (https://portal.azure.com), and also try the free Wallaroo.AI Community Edition (https://portal.wallaroo.community/).
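
The heart of such an orchestrated workload is a small, schedulable job: read a batch from a predefined source, call an inference endpoint, and write the results to a predefined destination. The sketch below is a generic illustration of that loop; the file paths and endpoint URL are assumptions for illustration, not Wallaroo.AI's orchestration API.

    # Generic sketch of a recurring batch-inference job. Paths and URL are
    # placeholders for illustration, not Wallaroo.AI's orchestration API.
    import json
    import urllib.request
    from pathlib import Path

    SOURCE = Path("incoming/forecast_batch.json")        # hypothetical data source
    DESTINATION = Path("results/forecast_output.json")   # hypothetical destination
    ENDPOINT = "http://localhost:8080/infer"             # hypothetical endpoint

    def run_batch() -> int:
        records = json.loads(SOURCE.read_text())
        req = urllib.request.Request(
            ENDPOINT,
            data=json.dumps({"inputs": records}).encode("utf-8"),
            headers={"Content-Type": "application/json"},
        )
        with urllib.request.urlopen(req, timeout=60) as resp:
            results = json.loads(resp.read())
        DESTINATION.parent.mkdir(parents=True, exist_ok=True)
        DESTINATION.write_text(json.dumps(results, indent=2))
        return len(records)

    if __name__ == "__main__":
        # A scheduler (cron, or an orchestration service) would invoke this
        # on a recurring schedule.
        print(f"processed {run_batch()} records")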

  • Format: Livestream

Topics: Infrastructure for AI

Language: English

View on demand

Wednesday, February 28, 2024

Getting Your AI Models To The Production Start Line

6:00 PM - 7:00 PM (UTC)

Getting AI models into production is hard. To begin with, there are many different model-building frameworks on the market to choose from, and each comes with a unique way to package models for production, adding to the complexity of getting to the production start line. In this session we will cover getting the model framework of your choice to production in a standardized way, as well as using an ML model registry for version control as models move between training, production, monitoring, and deployment.
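
To picture the registry idea mentioned above, here is a deliberately tiny, framework-agnostic sketch of recording model versions with their framework, artifact location, and stage as they move between training and production. It is only an illustration of version control for models (a JSON file on disk), not the Wallaroo.AI model registry API; every name in it is hypothetical.

    # Minimal, framework-agnostic sketch of a model registry: each registration
    # gets an incrementing version plus metadata. Illustration only, not the
    # Wallaroo.AI registry API.
    import json
    import time
    from pathlib import Path

    REGISTRY = Path("registry.json")  # hypothetical local registry file

    def register(name: str, framework: str, artifact: str, stage: str = "staging") -> int:
        entries = json.loads(REGISTRY.read_text()) if REGISTRY.exists() else []
        version = 1 + max((e["version"] for e in entries if e["name"] == name), default=0)
        entries.append({
            "name": name,
            "version": version,
            "framework": framework,   # e.g. "onnx", "pytorch", "sklearn"
            "artifact": artifact,     # path or URI of the packaged model
            "stage": stage,           # training -> staging -> production
            "registered_at": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
        })
        REGISTRY.write_text(json.dumps(entries, indent=2))
        return version

    v = register("checkout-detector", "onnx", "models/checkout-detector.onnx")
    print(f"registered checkout-detector version {v}")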

  • Format: Livestream

Topics: AI

Language: English

View on demand

Thursday, May 16, 2024

Beyond Edge AI Deployment: Manage, Observe, Update

4:00 PM - 5:00 PM (UTC)

Congratulations! You have deployed your AI models to the edge. How do you make sure they are performing the way they were intended to? And if they are not, what can you do about it? In this session we will dive deep into capturing observability data on edge deployments, even when the network connection is intermittent or has limited bandwidth, and into returning that data for specific time periods so observability can be run on it. We will show how data scientists in the Model Operations Center can monitor drift for models deployed to a specific edge location or a group of edge locations, and take action on underperforming models by hot-swapping in better-performing ones.

Resources: Try this Computer Vision model and other common AI use cases with the Wallaroo.AI Azure Inference Server Freemium Offer on Azure Marketplace (https://aka.ms/Wallaroo-Inference), and also try the free Wallaroo.AI Community Edition (https://aka.ms/Wallaroo.AI-Free).
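
The store-and-forward pattern described here can be sketched in a few lines: inference records are appended to a local buffer at the edge and flushed to the operations center whenever connectivity returns, so nothing is lost during an outage. The buffer path and upload stub below are assumptions for illustration, not Wallaroo.AI's implementation.

    # Illustrative store-and-forward buffer for edge observability data.
    # The path and upload stub are assumptions, not Wallaroo.AI's implementation.
    import json
    from pathlib import Path

    BUFFER = Path("edge_buffer.jsonl")  # hypothetical local buffer file

    def record(event: dict) -> None:
        """Append one inference record (inputs, outputs, timestamp) locally."""
        BUFFER.parent.mkdir(parents=True, exist_ok=True)
        with BUFFER.open("a") as f:
            f.write(json.dumps(event) + "\n")

    def flush(upload) -> int:
        """Send buffered records upstream; keep them if the upload fails."""
        if not BUFFER.exists():
            return 0
        lines = BUFFER.read_text().splitlines()
        try:
            upload([json.loads(line) for line in lines])  # e.g. HTTP POST upstream
        except OSError:
            return 0  # still offline; try again on the next cycle
        BUFFER.unlink()
        return len(lines)

    # Example: record locally during an outage, then flush when back online.
    record({"model": "checkout-detector", "confidence": 0.91, "ts": "2024-05-16T16:02:00Z"})
    sent = flush(lambda batch: None)  # stand-in for the real upstream upload call
    print(f"flushed {sent} buffered records")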

  • Format: Livestream

Topics: Data Science and Machine Learning

Language: English

View on demand

Tuesday, July 16, 2024

Deploying and Monitoring LLM Inference Endpoints

6:00 PM - 7:00 PM (UTC)

In this session we will dive into deploying LLMs to production inference endpoints and then putting in place automated monitoring metrics and alerts to help track model performance and suppress potential output issues such as toxicity. We will also cover the process of optimizing LLMs using RAG for relevant, accurate, and useful outputs. You will leave this session with a comprehensive understanding of deploying LLMs to production and monitoring the models for issues such as toxicity, relevance, and accuracy.

Resources: Try this and other common AI use cases with the Wallaroo.AI Azure Inference Server Freemium Offer on Azure Marketplace, and also try the free Wallaroo.AI Community Edition.
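
One way to picture the monitoring side of this is a thin guardrail around the LLM call that scores each response and suppresses or flags any that cross a toxicity threshold, while emitting a metric either way. In the generic sketch below, generate and toxicity_score are stand-ins for whatever endpoint and scoring model you use, and the 0.8 cutoff is an arbitrary assumption; none of it is a specific product API.

    # Generic guardrail sketch: score each LLM response for toxicity and
    # suppress responses above a threshold, logging a metric either way.
    # `generate` and `toxicity_score` are stand-ins, not a specific API.
    from typing import Callable

    TOXICITY_THRESHOLD = 0.8  # illustrative cutoff

    def guarded_completion(
        prompt: str,
        generate: Callable[[str], str],
        toxicity_score: Callable[[str], float],
    ) -> dict:
        response = generate(prompt)
        score = toxicity_score(response)
        flagged = score >= TOXICITY_THRESHOLD
        # In production this metric would feed the monitoring/alerting pipeline.
        print(f"toxicity={score:.2f} flagged={flagged}")
        return {
            "response": None if flagged else response,
            "toxicity": score,
            "suppressed": flagged,
        }

    # Example with trivial stand-in functions:
    result = guarded_completion(
        "How do I reset my password?",
        generate=lambda p: "Visit the account settings page and choose Reset.",
        toxicity_score=lambda text: 0.02,
    )
    print(result)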

  • Format: Livestream

Topics: Data Science and Machine Learning

Language: English

View on demand

Thursday, October 31, 2024

Building Custom LLMs for Production Inference Endpoints - Wallaroo.ai

6:00 PM - 7:00 PM (UTC)

In this session we will dive into the details of how to build, deploy, and optimize custom Large Language Models (LLMs) for production inference environments. The session will cover the key steps for custom LLMs (Llama), focusing on:

● Why custom LLMs?

● Inference performance optimization

● Harmful language detection

  • Format: Livestream

Topics: Data Science and Machine Learning

Language: English

View on demand


If you have questions, contact reactor@microsoft.com.