
Python + AI: Level Up

Join Microsoft Reactor and engage live with startups and developers

Ready to get started with AI? Microsoft Reactor provides events, training, and community resources to help startups, entrepreneurs, and developers build their next business on AI technology. Join us!


Python + AI: Level Up

  • Format: Livestream
  • Topic: Coding, languages, and frameworks
  • Language: English
  • Events in this series: 9

Want to build applications with generative AI in Python? Join our nine-part series on Python and AI!

We'll start with a tour of Large Language Models (LLMs) and vector embedding models, dive into popular techniques like Retrieval-Augmented Generation (RAG) and bring in multimodal models to work with images. Then we'll dive into the world of tool calling, MCP servers, and agent frameworks. Finally, we'll learn how to evaluate the safety and quality of AI-powered apps.

Throughout all our sessions, we'll use Python for our live examples and share all the code so that you can run it yourself. You can even follow along live, thanks to GitHub Models and GitHub Codespaces.

You can also join our weekly office hours in our AI Discord to ask any questions that don't get answered in the chat.

Do you speak Spanish? We'll also have a series for Spanish speakers!

Upcoming events

Click an event below to learn more and register for individual events.

All times are in Coordinated Universal Time (UTC).

Tuesday, October 7, 2025

Python + AI: Large Language Models

5:00 PM - 6:00 PM (UTC)

Join us for the first session in our Python + AI series! In this session, we'll talk about Large Language Models (LLMs), the models that power ChatGPT and GitHub Copilot. We'll use Python to interact with LLMs using popular packages like the OpenAI SDK and Langchain. We'll experiment with prompt engineering and few-shot examples to improve our outputs. We'll also show how to build a full stack app powered by LLMs, and explain the importance of concurrency and streaming for user-facing AI apps.
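To give a sense of what the live examples look like, here is a minimal sketch (not taken from the session materials) of calling a chat model with the OpenAI Python SDK and steering it with few-shot examples; the model name and credentials are placeholder assumptions.

```python
# A minimal sketch (assumed names: gpt-4o-mini, OPENAI_API_KEY) of a chat
# completion with a few-shot prompt; swap the client settings for GitHub Models
# or Azure OpenAI as needed.
import openai

client = openai.OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {"role": "system", "content": "You turn product names into short taglines."},
        # Few-shot examples showing the model the pattern we want:
        {"role": "user", "content": "Contoso Coffee"},
        {"role": "assistant", "content": "Wake up to something better."},
        {"role": "user", "content": "Fabrikam Bikes"},
    ],
    temperature=0.7,
)
print(response.choices[0].message.content)
```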


Wednesday, October 8, 2025

Python + AI: Vector embeddings

5:00 PM - 6:00 PM (UTC)

In our second session of the Python + AI series, we'll dive into a different kind of model: the vector embedding model. A vector embedding is a way to encode a text or image as an array of floating point numbers. Vector embeddings make it possible to perform similarity search on many kinds of content. In this session, we'll explore different vector embedding models, like the OpenAI text-embedding-3 series, with both visualizations and Python code. We'll compare distance metrics, use quantization to reduce vector size, and try out multimodal embedding models.
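As a rough illustration of the idea (not the session's own code), the sketch below embeds two strings with an OpenAI text-embedding-3 model and compares them with cosine similarity; the model name and API key setup are assumptions.

```python
# A small sketch: embed two strings and compare them with cosine similarity,
# one of the distance metrics discussed in this session.
import math
import openai

client = openai.OpenAI()  # reads OPENAI_API_KEY from the environment

def embed(text: str) -> list[float]:
    # Returns the embedding vector for a single input string.
    result = client.embeddings.create(model="text-embedding-3-small", input=text)
    return result.data[0].embedding

def cosine_similarity(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

print(cosine_similarity(embed("a cup of coffee"), embed("an espresso drink")))
```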


Thursday, October 9, 2025

Python + AI: Vision models

5:00 PM - 6:00 PM (UTC)

Our third stream in the Python + AI series is all about vision models! Vision models are LLMs that can accept both text and images, like GPT-4o and GPT-4o mini. You can use those models for image captioning, data extraction, question answering, classification, and more! We'll use Python to send images to vision models, build a basic chat-on-images app, and build a multimodal search engine.
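A minimal sketch of the core pattern, assuming the OpenAI Python SDK and a placeholder image URL: the image is passed alongside the text prompt as another content part of the user message.

```python
# Send an image plus a text question to a vision-capable model.
import openai

client = openai.OpenAI()

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "Describe this image in one sentence."},
                # Placeholder URL; a local file could be sent as a base64 data URL instead.
                {"type": "image_url", "image_url": {"url": "https://example.com/photo.jpg"}},
            ],
        }
    ],
)
print(response.choices[0].message.content)
```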


Tuesday, October 14, 2025

Python + AI: Retrieval Augmented Generation

5:00 PM - 6:00 PM (UTC)

In our fourth Python + AI session, we'll explore one of the most popular techniques used with LLMs: Retrieval Augmented Generation. RAG is an approach that sends context to the LLM so that it can provide well-grounded answers for a particular domain. The RAG approach can be used with many kinds of data sources, like CSVs, webpages, documents, and databases. In this session, we'll walk through RAG flows in Python, starting with a simple flow and culminating in a full-stack RAG application based on Azure AI Search.
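Here is a toy sketch of that basic flow (not the session's code): retrieval is a naive keyword match over an in-memory list, standing in for a real index such as Azure AI Search, and the retrieved text is sent to the model as grounding context.

```python
# Minimal RAG flow: retrieve relevant text, then answer only from those sources.
import openai

client = openai.OpenAI()

documents = [
    "Contoso ski rentals cost $30 per day and include boots.",
    "Contoso is open daily from 9am to 5pm, except holidays.",
]

def retrieve(query: str) -> list[str]:
    # Naive retrieval: keep documents that share any word with the query.
    words = set(query.lower().split())
    return [doc for doc in documents if words & set(doc.lower().split())]

question = "How much does a ski rental cost?"
context = "\n".join(retrieve(question))

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {"role": "system", "content": "Answer only using the provided sources:\n" + context},
        {"role": "user", "content": question},
    ],
)
print(response.choices[0].message.content)
```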


Wednesday, October 15, 2025

Python + AI: Structured outputs

5:00 PM - 6:00 PM (UTC)

In our fifth stream of the Python + AI series, we'll discover how to get LLMs to output structured responses that adhere to a schema. In Python, all we need to do is define a @dataclass or a Pydantic BaseModel, and we get validated output that meets our needs perfectly. We'll focus on the structured outputs mode available in OpenAI models, but you can use similar techniques with other model providers. Our examples will demonstrate the many ways you can use structured responses, like entity extraction, classification, and agentic workflows.
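A small sketch of the pattern, assuming a recent openai package with the parse() helper and a placeholder model name: the Pydantic model doubles as both the schema sent to the model and the validator for its response.

```python
# Structured outputs: the model's reply is parsed into a validated Pydantic object.
import openai
from pydantic import BaseModel

class CalendarEvent(BaseModel):
    name: str
    date: str
    participants: list[str]

client = openai.OpenAI()

completion = client.beta.chat.completions.parse(
    model="gpt-4o-mini",
    messages=[
        {"role": "system", "content": "Extract the event information."},
        {"role": "user", "content": "Alice and Bob are meeting for lunch on Friday."},
    ],
    response_format=CalendarEvent,  # the schema the output must follow
)
event = completion.choices[0].message.parsed  # a validated CalendarEvent instance
print(event)
```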


Thursday, October 16, 2025

Python + AI: Quality and safety

5:00 PM - 6:00 PM (UTC)

Now that we're more than halfway through our Python + AI series, we're covering a crucial topic: how to use AI safely, and how to evaluate the quality of AI outputs. There are multiple mitigation layers when working with LLMs: the model itself, a safety system on top, the prompting and context, and the application user experience. Our focus will be on Azure tools that make it easier to put safe AI systems into production. We'll show how to configure the Azure AI Content Safety system when working with Azure AI models, and how to handle those errors in Python code. Then we'll use the Azure AI Evaluation SDK to evaluate the safety and quality of the output from our LLM.
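As one illustration of the error-handling piece (a sketch under assumptions, not the session's code): with an Azure OpenAI deployment behind the openai package and content filtering enabled, a blocked prompt surfaces as a 400 error that we can catch and turn into a friendly message. The endpoint, key, and deployment name below are placeholders, and the exact error payload can vary by API version.

```python
# Catch requests blocked by the Azure AI Content Safety filter.
import openai

client = openai.AzureOpenAI(
    azure_endpoint="https://YOUR-RESOURCE.openai.azure.com",  # placeholder
    api_version="2024-06-01",
    api_key="YOUR-KEY",  # placeholder; prefer keyless auth in real apps
)

try:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # your deployment name
        messages=[{"role": "user", "content": "Some user-supplied prompt"}],
    )
    print(response.choices[0].message.content)
except openai.BadRequestError as e:
    # Azure typically returns HTTP 400 with code "content_filter" for blocked prompts;
    # the check below is illustrative, since the error shape can vary.
    if e.code == "content_filter":
        print("Sorry, that request was blocked by the content safety system.")
    else:
        raise
```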


Tuesday, October 21, 2025

Python + AI: Tool calling

5:00 PM - 6:00 PM (UTC)

In our seventh session, we'll focus on tool calling, the technique that lets an LLM request calls to functions defined in our Python code. Tool calling is the foundation for the agents and MCP servers covered in the next two sessions: we describe our tools to the model, the model decides which tool to call and with what arguments, and our code runs the tool and sends the result back so the model can produce a final answer.
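A minimal sketch of one tool-calling round trip with the OpenAI SDK (the lookup_weather function and model name are hypothetical placeholders): we describe the tool, the model asks to call it, we run it, and we send the result back for a final answer.

```python
# One tool-calling round trip: model requests a tool, we execute it, model answers.
import json
import openai

client = openai.OpenAI()

def lookup_weather(city: str) -> str:
    return f"It is 22°C and sunny in {city}."  # stand-in for a real weather API

tools = [{
    "type": "function",
    "function": {
        "name": "lookup_weather",
        "description": "Get the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

messages = [{"role": "user", "content": "What's the weather in Lisbon?"}]
response = client.chat.completions.create(model="gpt-4o-mini", messages=messages, tools=tools)
tool_call = response.choices[0].message.tool_calls[0]
args = json.loads(tool_call.function.arguments)

# Append the assistant's tool call and our tool result, then ask for the final answer.
messages.append(response.choices[0].message)
messages.append({"role": "tool", "tool_call_id": tool_call.id, "content": lookup_weather(**args)})
final = client.chat.completions.create(model="gpt-4o-mini", messages=messages, tools=tools)
print(final.choices[0].message.content)
```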


Wednesday, October 22, 2025

Python + AI: AI agents

5:00 PM - 6:00 PM (UTC)

For the penultimate session of our Python + AI series, we're building AI agents! We'll use many of the most popular Python AI agent frameworks: Langgraph, Semantic Kernel, Autogen, Pydantic AI, and more. Our agents will start simple and then ramp up in complexity, demonstrating different architectures like hand-offs, round-robin, supervisor, graphs, and ReAct.
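The frameworks named above differ in architecture, but most of them boil down to a loop like this framework-free sketch (an illustration under assumptions, not any framework's actual code): keep calling the model, execute whatever tools it requests, and stop when it answers without a tool call.

```python
# A minimal agent loop on top of OpenAI-style tool calling.
import json
import openai

client = openai.OpenAI()

def run_agent(messages, tools, tool_functions, model="gpt-4o-mini", max_turns=5):
    """Call the model, execute any requested tools, and repeat until it answers."""
    for _ in range(max_turns):
        response = client.chat.completions.create(model=model, messages=messages, tools=tools)
        message = response.choices[0].message
        messages.append(message)
        if not message.tool_calls:
            return message.content  # the model produced a final answer
        for call in message.tool_calls:
            result = tool_functions[call.function.name](**json.loads(call.function.arguments))
            messages.append({"role": "tool", "tool_call_id": call.id, "content": str(result)})
    return "Stopped after reaching the turn limit."
```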


Thursday, October 23, 2025

Python + AI: Model Context Protocol

5:00 PM - 6:00 PM (UTC)

In the final session of our Python + AI series, we're diving into the hottest technology of 2025: MCP, Model Context Protocol. This open protocol makes it easy to extend AI agents and chatbots with custom functionality, to make them more powerful and flexible. We'll show how to use the official Python FastMCP SDK to build an MCP server running locally and consume that server from chatbots like GitHub Copilot. Then we'll build our own MCP client to consume the server. Finally, we'll discover how easy it is to point popular AI agent frameworks like Langgraph, Pydantic AI, and Semantic Kernel at MCP servers. With great power comes great responsibility, so we will briefly discuss the many security risks that come with MCP, both as a user and developer.
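For a flavor of the server side, here is a hedged sketch of a tiny local MCP server using FastMCP from the official MCP Python SDK; the server name and tool are toy placeholders.

```python
# A tiny MCP server exposing one tool, runnable locally over stdio.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("demo-server")

@mcp.tool()
def add(a: int, b: int) -> int:
    """Add two numbers together."""
    return a + b

if __name__ == "__main__":
    # Runs over stdio by default, so clients like GitHub Copilot can launch it.
    mcp.run()
```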



Register for this series


By registering for this event, you agree to abide by the Microsoft Reactor Code of Conduct.