LEARN, CONNECT, BUILD
Ready to get started with AI and the latest technologies? Microsoft Reactor provides events, training, and community resources to help developers, entrepreneurs and startups build on AI technology and more. Join us!
26 February 2026 | 6:30 PM - 7:30 PM (UTC)
Topic: Agents
Language: English
In the third session of our Python + Agents series, we’ll focus on two essential components of building reliable agents: observability and evaluation.
We’ll begin with observability, using OpenTelemetry to capture traces, metrics, and logs from agent actions. You'll learn how to instrument your agents and use a local Aspire dashboard to identify slowdowns and failures.
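As a preview of the kind of instrumentation covered, here is a minimal sketch using the OpenTelemetry Python SDK; the span names and the run_agent function are illustrative placeholders, not the session's actual code, and it prints spans to the console rather than exporting to an OTLP endpoint such as the Aspire dashboard.

from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor, ConsoleSpanExporter

# Configure a tracer provider that prints finished spans to the console.
provider = TracerProvider()
provider.add_span_processor(BatchSpanProcessor(ConsoleSpanExporter()))
trace.set_tracer_provider(provider)

tracer = trace.get_tracer("my-agent")

def run_agent(task: str) -> str:
    # Wrap the agent call in a span so its duration and attributes are recorded.
    with tracer.start_as_current_span("agent.run") as span:
        span.set_attribute("agent.task", task)
        result = f"(agent answer for: {task})"  # placeholder for the real agent call
        span.set_attribute("agent.result_length", len(result))
        return result

print(run_agent("Summarize today's meeting notes"))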
From there, we’ll explore how to evaluate agent behavior using the Azure AI Evaluation SDK. You’ll see how to define evaluation criteria, run automated assessments over a set of tasks, and analyze the results to measure accuracy, helpfulness, and task success.
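For a rough idea of what that looks like, here is a hedged sketch using the azure-ai-evaluation package; the endpoint, deployment name, and data file are placeholders, and exact parameter names may differ between SDK versions.

import os
from azure.ai.evaluation import RelevanceEvaluator, evaluate

# Configuration for the judge model (assumed to be an Azure OpenAI deployment).
model_config = {
    "azure_endpoint": os.environ["AZURE_OPENAI_ENDPOINT"],
    "api_key": os.environ["AZURE_OPENAI_API_KEY"],
    "azure_deployment": "gpt-4o",  # hypothetical deployment name
}

relevance = RelevanceEvaluator(model_config)

# Score a single query/response pair...
single_result = relevance(
    query="What time does the session start?",
    response="The session starts at 6:30 PM UTC.",
)
print(single_result)

# ...or run the evaluator over a JSONL file of recorded agent tasks and responses.
results = evaluate(
    data="agent_runs.jsonl",  # hypothetical file; each line: {"query": ..., "response": ...}
    evaluators={"relevance": relevance},
)
print(results)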
By the end of the session, you’ll have practical tools and workflows for monitoring, measuring, and improving your agents—so they’re not just functional, but dependable and verifiably effective.
To follow along with the live examples, sign up for a free GitHub account. If you are brand new to generative AI with Python, start with our 9-part Python + AI series, which covers LLMs, embedding models, RAG, tool calling, MCP, and more.
Speakers
Pamela Fox, Microsoft
This event is part of the Python + Agents: Building AI agents and workflows with Agent Framework Series.
Visit the Series Page to see all the upcoming and on-demand events.