
Python + AI

Join Microsoft Reactor and engage live with startups and developers

Ready to get started with AI? Microsoft Reactor provides events, training, and community resources to help startups, entrepreneurs, and developers build their next business on AI technology. Join us!

Python + AI

  • Format: Livestream
  • Topic: Intelligent Applications
  • Language: English
  • Events in this series: 6

Want to build applications with generative AI in Python? Join our six-part series on Python and AI!

We'll start with a tour of Large Language Models (LLMs) and vector embedding models, dive into popular techniques like Retrieval-Augmented Generation (RAG) and structured outputs, bring in multimodal models to work with images, and culminate with AI safety, quality, and evaluation.

Throughout all our sessions, we'll use Python for our live examples and share all the code so that you can run it yourself. You can even follow along live, thanks to GitHub Models and GitHub Codespaces.

You can also join weekly office hours in our AI Discord to ask any questions that don't get answered in the chat.

Want to check out this series in Spanish? Register here

Upcoming events

Click an event below to learn more and register for individual events.

All times are in Coordinated Universal Time (UTC).

Tuesday, March 11, 2025

Python + AI: Large Language Models

4:30 PM - 5:30 PM (UTC)

Join us for the first session in our Python + AI series! In this session, we'll talk about Large Language Models (LLMs), the models that power ChatGPT and GitHub Copilot. We'll use Python to interact with LLMs using popular packages like the OpenAI SDK and Langchain. We'll experiment with prompt engineering and few-shot examples to improve our outputs. We'll show how to build a full-stack app powered by LLMs, and explain the importance of concurrency and streaming for user-facing AI apps.

Follow along live, thanks to GitHub Models and GitHub Codespaces. If you'd like to follow along with the live examples, make sure you've got a GitHub account. You can also join weekly office hours in our AI Discord to ask any questions that don't get answered in the chat.
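To give a flavor of the kind of code covered in this session, here is a minimal sketch of chatting with an LLM from Python using the OpenAI SDK, including a streaming variant. The endpoint URL, model name, and prompts are illustrative assumptions (a GitHub Models setup with a GITHUB_TOKEN environment variable), not the session's exact code.

```python
# A minimal sketch of chatting with an LLM via the OpenAI Python SDK.
# Assumptions: the GitHub Models endpoint below and a GITHUB_TOKEN env var;
# the model name and prompts are placeholders.
import os
from openai import OpenAI

client = OpenAI(
    base_url="https://models.inference.ai.azure.com",  # GitHub Models endpoint (assumption)
    api_key=os.environ["GITHUB_TOKEN"],
)

# Basic completion with a system prompt.
response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {"role": "system", "content": "You are a concise teaching assistant."},
        {"role": "user", "content": "Explain in one sentence what an LLM is."},
    ],
    temperature=0.7,
)
print(response.choices[0].message.content)

# Streaming variant: print tokens as they arrive, which keeps user-facing apps responsive.
stream = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Write a haiku about Python."}],
    stream=True,
)
for chunk in stream:
    if chunk.choices and chunk.choices[0].delta.content:
        print(chunk.choices[0].delta.content, end="")
print()
```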

  • Format: Livestream
  • Topic: Intelligent Applications
  • Language: English

Details

Thursday, March 13, 2025

Python + AI: Vector embeddings

4:30 PM - 5:30 PM (UTC)

In our second session of the Python + AI series, we'll dive into a different kind of model: the vector embedding model. A vector embedding is a way to encode a text or image as an array of floating-point numbers. Vector embeddings make it possible to perform similarity search on many kinds of content. In this session, we'll explore different vector embedding models, like the OpenAI text-embedding-3 series, with both visualizations and Python code. We'll compare distance metrics, use quantization to reduce vector size, and try out multimodal embedding models.

Follow along live, thanks to GitHub Models and GitHub Codespaces. If you'd like to follow along with the live examples, make sure you've got a GitHub account. You can also join weekly office hours in our AI Discord to ask any questions that don't get answered in the chat.
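To make the idea concrete, here is a minimal sketch of computing embeddings and comparing them with cosine similarity. The endpoint, token variable, and model name (text-embedding-3-small via GitHub Models) are assumptions for illustration.

```python
# A minimal sketch of vector embeddings and cosine similarity with the OpenAI SDK.
# Assumptions: GitHub Models endpoint, GITHUB_TOKEN env var, text-embedding-3-small model.
import os
import numpy as np
from openai import OpenAI

client = OpenAI(
    base_url="https://models.inference.ai.azure.com",  # GitHub Models endpoint (assumption)
    api_key=os.environ["GITHUB_TOKEN"],
)

sentences = [
    "A dog plays fetch in the park",
    "A puppy chases a ball on the grass",
    "The stock market fell sharply today",
]
result = client.embeddings.create(model="text-embedding-3-small", input=sentences)
vectors = np.array([item.embedding for item in result.data])

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity: 1.0 means same direction, values near 0 mean unrelated."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Related sentences should score noticeably higher than unrelated ones.
print("dog vs. puppy: ", cosine_similarity(vectors[0], vectors[1]))
print("dog vs. stocks:", cosine_similarity(vectors[0], vectors[2]))
```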

  • Format: Livestream
  • Topic: Intelligent Applications
  • Language: English

Details

Tuesday, March 18, 2025

Python + AI: Retrieval Augmented Generation

4:30 PM - 5:30 PM (UTC)

In our third Python + AI session, we'll explore one of the most popular techniques used with LLMs: Retrieval Augmented Generation. RAG is an approach that sends context to the LLM so that it can provide well-grounded answers for a particular domain. The RAG approach can be used with many kinds of data sources, like CSVs, webpages, documents, and databases. In this session, we'll walk through RAG flows in Python, starting with a simple flow and culminating in a full-stack RAG application based on Azure AI Search.

Follow along live, thanks to GitHub Models and GitHub Codespaces. If you'd like to follow along with the live examples, make sure you've got a GitHub account. You can also join weekly office hours in our AI Discord to ask any questions that don't get answered in the chat.
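As a rough illustration of the retrieve-then-generate pattern, here is a minimal sketch of a RAG flow. The tiny in-memory "documents" list, the naive keyword retriever, and the endpoint/model names are all hypothetical stand-ins; the session itself builds up to real retrievers such as Azure AI Search.

```python
# A minimal sketch of a RAG flow: retrieve matching snippets, then ask the LLM
# to answer using only those snippets as context.
# Assumptions: GitHub Models endpoint, GITHUB_TOKEN env var, placeholder data.
import os
from openai import OpenAI

client = OpenAI(
    base_url="https://models.inference.ai.azure.com",  # GitHub Models endpoint (assumption)
    api_key=os.environ["GITHUB_TOKEN"],
)

documents = [
    "The Contoso hiking boots cost $89 and are waterproof.",
    "The Contoso tent sleeps four people and weighs 3 kg.",
    "The Contoso headlamp has a 200-lumen beam and USB charging.",
]

question = "How much do the hiking boots cost?"

# 1. Retrieve: naive keyword match (real apps use vector or hybrid search).
keywords = [w.strip("?.,").lower() for w in question.split() if len(w) > 4]
matches = [doc for doc in documents if any(kw in doc.lower() for kw in keywords)]

# 2. Augment + generate: send the retrieved context along with the question.
response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {
            "role": "system",
            "content": "Answer only using the provided sources.\n\nSources:\n" + "\n".join(matches),
        },
        {"role": "user", "content": question},
    ],
)
print(response.choices[0].message.content)
```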

  • Format: Livestream
  • Topic: Intelligent Applications
  • Language: English

Details

Thursday, March 20, 2025

Python + AI: Vision models

4:30 PM - 5:30 PM (UTC)

Our fourth stream in the Python + AI series is all about vision models! Vision models are LLMs that can accept both text and images, like GPT-4o and GPT-4o mini. You can use those models for image captioning, data extraction, question answering, classification, and more! We'll use Python to send images to vision models, build a basic chat app with image upload, and even use vision models inside a RAG application.

Follow along live, thanks to GitHub Models and GitHub Codespaces. If you'd like to follow along with the live examples, make sure you've got a GitHub account. You can also join weekly office hours in our AI Discord to ask any questions that don't get answered in the chat.
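For a sense of how an image gets sent to a vision model, here is a minimal sketch using the OpenAI SDK with a base64-encoded local image. The file name, prompt, endpoint, and model name are illustrative assumptions.

```python
# A minimal sketch of sending an image plus a question to a vision-capable model.
# Assumptions: GitHub Models endpoint, GITHUB_TOKEN env var, a local "receipt.png"
# placeholder image, and the gpt-4o-mini model name.
import base64
import os
from openai import OpenAI

client = OpenAI(
    base_url="https://models.inference.ai.azure.com",  # GitHub Models endpoint (assumption)
    api_key=os.environ["GITHUB_TOKEN"],
)

with open("receipt.png", "rb") as f:  # hypothetical local image
    image_b64 = base64.b64encode(f.read()).decode("utf-8")

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {
            "role": "user",
            # Vision requests mix text parts and image parts in one message.
            "content": [
                {"type": "text", "text": "What is the total amount on this receipt?"},
                {"type": "image_url", "image_url": {"url": f"data:image/png;base64,{image_b64}"}},
            ],
        }
    ],
)
print(response.choices[0].message.content)
```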

  • Format: Livestream
  • Topic: Intelligent Applications
  • Language: English

Details

Tuesday, March 25, 2025

Python + AI: Function calling & structured outputs

4:30 PM - 5:30 PM (UTC)

In our fifth stream of the Python + AI series, we're going to explore the two main ways to get LLMs to output structured responses that adhere to a schema: function calling and structured outputs. We'll start with function calling, which is the most widely supported way to get structured responses, and discuss its drawbacks. Then we'll focus on the new structured outputs mode available in OpenAI models, which can be used with Pydantic models and even in combination with function calling. Our examples will demonstrate the many ways you can use structured responses, like entity extraction, classification, and agentic workflows.

Follow along live, thanks to GitHub Models and GitHub Codespaces. If you'd like to follow along with the live examples, make sure you've got a GitHub account. You can also join weekly office hours in our AI Discord to ask any questions that don't get answered in the chat.
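Here is a minimal sketch of the structured outputs approach, using a Pydantic model with the parse helper available in recent versions of the openai package. The schema, prompts, endpoint, and model name are illustrative assumptions, and the function-calling alternative is not shown.

```python
# A minimal sketch of structured outputs: the model's response is parsed
# directly into a Pydantic model that defines the schema.
# Assumptions: GitHub Models endpoint, GITHUB_TOKEN env var, gpt-4o-mini,
# and a recent openai package that provides the beta parse helper.
import os
from openai import OpenAI
from pydantic import BaseModel

client = OpenAI(
    base_url="https://models.inference.ai.azure.com",  # GitHub Models endpoint (assumption)
    api_key=os.environ["GITHUB_TOKEN"],
)

class CalendarEvent(BaseModel):
    name: str
    date: str
    participants: list[str]

completion = client.beta.chat.completions.parse(
    model="gpt-4o-mini",
    messages=[
        {"role": "system", "content": "Extract the event information."},
        {"role": "user", "content": "Alice and Bob are going to a science fair on Friday."},
    ],
    response_format=CalendarEvent,  # the Pydantic model defines the output schema
)

event = completion.choices[0].message.parsed  # a CalendarEvent instance
print(event.name, event.date, event.participants)
```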

  • Format: Livestream
  • Topic: Intelligent Applications
  • Language: English

Details

Thursday, March 27, 2025

Python + AI: Quality & Safety

4:30 PM - 5:30 PM (UTC)

In our final session of the Python + AI series, we're culminating with a crucial topic: how to use AI safely, and how to evaluate the quality of AI outputs. There are multiple mitigation layers when working with LLMs: the model itself, a safety system on top, the prompting and context, and the application user experience. Our focus will be on Azure tools that make it easier to put safe AI systems into production. We'll show how to configure the Azure AI Content Safety system when working with Azure AI models, and how to handle those errors in Python code. Then we'll use the Azure AI Evaluation SDK to evaluate the safety and quality of the output from our LLM.

Follow along live, thanks to GitHub Models and GitHub Codespaces. If you'd like to follow along with the live examples, make sure you've got a GitHub account. You can also join weekly office hours in our AI Discord to ask any questions that don't get answered in the chat.
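As one small example of the error-handling side, here is a minimal sketch of catching a content-filter rejection from an Azure OpenAI deployment via the OpenAI SDK. The endpoint, key, API version, and deployment name are placeholders, and the exact error body can vary, so treat this purely as an illustration rather than the session's code; the Azure AI Evaluation SDK portion is not shown.

```python
# A minimal sketch of handling a content-filter error from an Azure OpenAI deployment.
# Assumptions: AZURE_OPENAI_ENDPOINT and AZURE_OPENAI_KEY env vars, a deployment
# named "gpt-4o-mini", and an API version string that may need updating.
import os
import openai
from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],
    api_key=os.environ["AZURE_OPENAI_KEY"],
    api_version="2024-06-01",
)

try:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # your deployment name (placeholder)
        messages=[{"role": "user", "content": "Some user-supplied text"}],
    )
    print(response.choices[0].message.content)
except openai.BadRequestError as err:
    # Azure AI Content Safety rejections surface as 400 errors whose body
    # includes a "content_filter" code; the check below is a simple heuristic.
    if "content_filter" in str(err):
        print("The request was blocked by the content safety system.")
    else:
        raise
```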

  • Format: Livestream
  • Topic: Intelligent Applications
  • Language: English

Details

Speakers

Register for this series

By registering for this event, you agree to abide by the Microsoft Reactor Code of Conduct.

If you have any questions, please contact reactor@microsoft.com