
DISCOVER, CONNECT, GROW

Microsoft Reactor

Join Microsoft Reactor and engage with startups and developers live

Ready to get started with AI? Microsoft Reactor provides events, training, and community resources to help startups, entrepreneurs, and developers build their next business on AI technology. Join us!

Deploying and Monitoring LLM Inference Endpoints

16 July, 2024 | 6:00 PM - 7:00 PM Coordinated Universal Time (UTC)

  • Format: Livestream

Topic: Data Science & Machine Learning

Language: English

In this session, we will dive into deploying LLMs to production inference endpoints and then putting automated monitoring metrics and alerts in place to help track model performance and suppress potential output issues such as toxicity.
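
As a preview of the deployment-and-monitoring pattern, here is a minimal Python sketch of calling a deployed inference endpoint over HTTP and applying a simple toxicity guardrail. The endpoint URL, payload fields, and alert threshold are illustrative assumptions only, not the Wallaroo.AI SDK or the session's actual code:

```python
import requests  # assumed available; any HTTP client works

# Hypothetical endpoint URL and payload shape -- substitute your own
# deployment's values (the session's Wallaroo.AI tooling differs).
ENDPOINT_URL = "https://example.com/v1/models/my-llm/infer"
TOXICITY_ALERT_THRESHOLD = 0.8  # illustrative threshold, not a recommendation

def generate_and_check(prompt: str) -> dict:
    """Send a prompt to the deployed LLM and attach a simple toxicity check."""
    response = requests.post(ENDPOINT_URL, json={"prompt": prompt}, timeout=30)
    response.raise_for_status()
    result = response.json()  # assumed to contain "text" and "toxicity_score"

    # A real deployment would compute toxicity with a classifier or a managed
    # evaluation service; here we just read a hypothetical score field.
    if result.get("toxicity_score", 0.0) > TOXICITY_ALERT_THRESHOLD:
        print(f"ALERT: toxic output suppressed for prompt: {prompt!r}")
        result["text"] = "[response withheld by toxicity guardrail]"
    return result
```

In practice the same threshold check would feed an alerting system rather than a print statement, so drift in the monitored metrics surfaces automatically.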

We will also cover optimizing LLMs with retrieval-augmented generation (RAG) to produce relevant, accurate, and useful outputs.
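
To illustrate the RAG idea, below is a minimal, self-contained sketch that retrieves the most similar documents and prepends them to the prompt. The hashed bag-of-words embedding, the toy documents, and the prompt template are placeholder assumptions standing in for a real embedding model and knowledge base:

```python
import numpy as np

# Toy document store; in practice these come from your knowledge base
# and a real embedding model replaces the placeholder below.
DOCUMENTS = [
    "Inference endpoints expose a REST API for predictions.",
    "Toxicity monitoring compares model outputs against a classifier score.",
    "RAG retrieves supporting documents and adds them to the prompt.",
]

def embed(text: str, dim: int = 64) -> np.ndarray:
    """Placeholder embedding: a hashed bag-of-words vector (illustrative only)."""
    vec = np.zeros(dim)
    for token in text.lower().split():
        vec[hash(token) % dim] += 1.0
    norm = np.linalg.norm(vec)
    return vec / norm if norm else vec

DOC_VECTORS = np.stack([embed(d) for d in DOCUMENTS])

def build_rag_prompt(question: str, k: int = 2) -> str:
    """Retrieve the top-k most similar documents and prepend them as context."""
    scores = DOC_VECTORS @ embed(question)   # cosine similarity of unit vectors
    top_k = np.argsort(scores)[::-1][:k]
    context = "\n".join(DOCUMENTS[i] for i in top_k)
    return f"Use the context to answer.\n\nContext:\n{context}\n\nQuestion: {question}"

print(build_rag_prompt("How does RAG improve accuracy?"))
```

Grounding the prompt in retrieved context is what pushes the model toward relevant and accurate answers instead of unsupported ones.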

You will leave this session with a comprehensive understanding of deploying LLMs to production and monitoring the models for issues such as toxicity, relevance, and accuracy.

Try other common AI use cases with the Wallaroo.AI Azure Inference Server Freemium offer on Azure Marketplace, and also try the free Wallaroo.AI Community Edition.

  • LLM

Speakers

Registration

By registering for this event you agree to abide by the Microsoft Reactor Code of Conduct.

For questions, please contact us at reactor@microsoft.com.