

Learn, Connect, Build

Microsoft Reactor

Join Microsoft Reactor and engage with developers live

Ready to get started with AI and the latest technologies? Microsoft Reactor provides events, training, and community resources to help developers, entrepreneurs, and startups build on AI technology and more. Join us!



Python + AI: Quality & Safety

27 March, 2025 | 4:30 PM - 5:30 PM (UTC) Coordinated Universal Time

  • Format: Livestream

Topic: AI Applications

Language: English

In our final session of the Python + AI series, we're culminating with a crucial topic: how to use AI safely, and how to evaluate the quality of AI outputs.

There are multiple mitigation layers when working with LLMs: the model itself, a safety system on top, the prompting and context, and the application user experience. Our focus will be on Azure tools that make it easier to put safe AI systems into production.

We'll show how to configure the Azure AI Content Safety system when working with Azure AI models, and how to handle those errors in Python code. Then we'll use the Azure AI Evaluation SDK to evaluate the safety and quality of the output from our LLM.
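As a taste of the error handling covered in the session, here is a minimal sketch of catching a request blocked by the Azure AI Content Safety system. It assumes the `openai` Python package and an Azure OpenAI deployment named `gpt-4o-mini` — both placeholders for your own setup — and is illustrative, not the session's exact code:

```python
def is_content_filter_error(error_body: object) -> bool:
    """Return True if a parsed Azure OpenAI error body indicates that
    the Azure AI Content Safety system blocked the request."""
    return isinstance(error_body, dict) and error_body.get("code") == "content_filter"


def safe_chat(prompt: str) -> str:
    """Send a chat prompt, returning a friendly message instead of
    crashing when the content safety filter rejects the prompt."""
    # Requires `pip install openai` plus the AZURE_OPENAI_ENDPOINT and
    # AZURE_OPENAI_API_KEY environment variables.
    import os
    from openai import AzureOpenAI, BadRequestError

    client = AzureOpenAI(
        api_version="2024-06-01",
        azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],
        api_key=os.environ["AZURE_OPENAI_API_KEY"],
    )
    try:
        response = client.chat.completions.create(
            model="gpt-4o-mini",  # your deployment name here
            messages=[{"role": "user", "content": prompt}],
        )
        return response.choices[0].message.content or ""
    except BadRequestError as err:
        # Blocked prompts come back as HTTP 400 with code "content_filter".
        if is_content_filter_error(err.body):
            return "Sorry, that request was blocked by the content safety filter."
        raise
```

Exactly where the SDK surfaces the error `code` can vary by `openai` package version, so treat the helper as a starting point rather than a guaranteed contract.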

Follow along live, thanks to GitHub Models and GitHub Codespaces.
If you'd like to follow along with the live examples, make sure you have a GitHub account.

You can also join weekly office hours in our AI Discord to ask any questions that don't get answered in the chat.

Check out more resources here

Prerequisites

Want to check out this series in Spanish? Register here

  • AI
  • Python

Speakers

Related events

The following events may also be of interest to you. Be sure to visit our Reactor homepage to see all available events.
