

Learn, connect, build

Microsoft Reactor

Join Microsoft Reactor and engage with developers live

Ready to get started with AI and the latest technologies? Microsoft Reactor offers events, training, and community resources to help developers, entrepreneurs, and startups build on AI and more. Come join us!



Python + Agents: Monitoring and evaluating agents

26 February 2026 | 6:30 PM - 7:30 PM (UTC) Coordinated Universal Time

  • Format: Livestream

Topic: Agents

Language: English

In the third session of our Python + Agents series, we’ll focus on two essential components of building reliable agents: observability and evaluation.

We’ll begin with observability, using OpenTelemetry to capture traces, metrics, and logs from agent actions. You'll learn how to instrument your agents and use a local Aspire dashboard to identify slowdowns and failures.

From there, we’ll explore how to evaluate agent behavior using the Azure AI Evaluation SDK. You’ll see how to define evaluation criteria, run automated assessments over a set of tasks, and analyze the results to measure accuracy, helpfulness, and task success.
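To make the workflow concrete without reproducing the Azure AI Evaluation SDK's API, here is a hand-rolled sketch of the same idea: named criteria, an automated run over a task set, and an aggregated summary. All names (`exact_match`, `contains_answer`, `evaluate_agent`) are hypothetical illustrations, not SDK calls:

```python
# Two toy scoring criteria; the SDK ships richer evaluators for
# accuracy, relevance, and similar qualities.
def exact_match(expected: str, actual: str) -> float:
    return 1.0 if expected.strip().lower() == actual.strip().lower() else 0.0

def contains_answer(expected: str, actual: str) -> float:
    return 1.0 if expected.lower() in actual.lower() else 0.0

CRITERIA = {"accuracy": exact_match, "helpfulness": contains_answer}

def evaluate_agent(agent, tasks):
    """Run every criterion over every task and average the scores."""
    results = []
    for task in tasks:
        actual = agent(task["input"])
        scores = {name: fn(task["expected"], actual)
                  for name, fn in CRITERIA.items()}
        results.append({"task": task["input"], **scores})
    summary = {name: sum(r[name] for r in results) / len(results)
               for name in CRITERIA}
    return results, summary
```

The per-task `results` support drilling into individual failures, while `summary` gives the aggregate view used to compare agent versions.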

By the end of the session, you’ll have practical tools and workflows for monitoring, measuring, and improving your agents—so they’re not just functional, but dependable and verifiably effective.

Prerequisites

To follow along with the live examples, sign up for a free GitHub account. If you are brand new to generative AI with Python, start with our 9-part Python + AI series, which covers LLMs, embedding models, RAG, tool calling, MCP, and more.

Speakers

