POSETTE 2026

Join Microsoft Reactor and engage with developers live

Ready to get started with AI and the latest technologies? Microsoft Reactor provides events, training, and community resources to help developers, entrepreneurs and startups build on AI technology and more. Join us!

POSETTE 2026

  • Format: Livestream

Topic: Open Source

Language: English

  • Events in this Series: 4

POSETTE: An Event for Postgres (pronounced /Pō-zet/) is a free and virtual developer event. The name POSETTE stands for Postgres Open Source Ecosystem Talks Training & Education. Happening Jun 16-18, 2026, join us for 4 unique livestreams to hear from open source users and experts in many aspects of the PostgreSQL ecosystem—including on Azure. Come learn what you can do with the world’s most advanced open source relational database—from the nerdy to the sublime.

The full schedule & speakers can be found on the POSETTE website at https://posetteconf.com/2026/. In the meantime, you can catch up on last year’s talks at aka.ms/posette-playlist.

Upcoming Events

Click on an event below to learn more and register for individual events.

All times in Coordinated Universal Time (UTC)

Tuesday, Jun 16, 2026
POSETTE: An Event for Postgres 2026 – Livestream 1

3:00 PM - 9:00 PM (UTC)

Now in its 5th year, POSETTE: An Event for Postgres (pronounced /Pō-zet/) is a free and virtual developer event. The name POSETTE stands for Postgres Open Source Ecosystem Talks Training & Education. Happening Jun 16-18, 2026, join us for 4 unique livestreams to hear from open source users and experts in many aspects of the PostgreSQL ecosystem—including on Azure. Come learn what you can do with the world’s most advanced open source relational database—from the nerdy to the sublime. Come chat with POSETTE speakers & other community members on the #posetteconf channel in the Microsoft Open Source Discord before, during, and after the event. The full schedule & speakers for Livestream 1 are below. In the meantime, you can catch up on last year’s talks at https://aka.ms/posette-playlist. Learn more about POSETTE 2026 at https://posetteconf.com/2026/

Livestream 1 Agenda

Keynote: Driving Postgres forward at Microsoft — Affan Dar, Charles Feddersen
Microsoft is deeply invested in Postgres, both in the upstream open source project and as a premier cloud database offering. In this keynote, Charles Feddersen and Affan Dar, who lead PostgreSQL engineering at Azure, will share how Microsoft is driving Postgres forward today: not just as a database to run in the cloud, but as an open source project we’re investing in for the long run. They’ll talk about what that looks like in practice, including Microsoft’s work in the Postgres community, recent contributions to core Postgres, and how those efforts shape the Azure Database for PostgreSQL managed service. The session will also touch on database migrations, developer tools, and why we created Azure HorizonDB. This is a technical, inside look at why Postgres matters to Microsoft, what the team has been focused on recently, and how that work connects back to the broader Postgres ecosystem.

JSON in PostgreSQL - evil data type or just needs to be tamed? — Boriss Mejias
You heard that PostgreSQL also supports the JSON data type, and you wanted to enjoy the dynamism of schema freedom mixed with the benefits of a relational database. You wanted a flexible data type combined with columns with strong types, with relationships between tables, and with constraints to guarantee data integrity. But now that you have integrated JSON deep in your schema design, you start observing odd behaviors, unpredictable performance, and unused indexes. You start to wonder if you haven’t introduced an evil data type disguised as a friendly and flexible object. Maybe there are things you could do in Postgres to make things run faster. Are some indexes better than others? What about table partitioning? And what about TOAST tables? Do they play a role in accessing the data stored in JSON? Or… maybe the B in JSONB stands for Beast? Can you tame the JSONB objects? In this talk we will review schema-design decisions when using JSON/JSONB in PostgreSQL, with some tips and tricks based on experience working with real-world scenarios. We will work through a case study to create a pragmatic view of working with JSON/JSONB in PostgreSQL.

random_page_cost in Postgres - why the default is 4.0 and should you lower it? — Tomas Vondra
random_page_cost is one of the basic parameters affecting query planning in PostgreSQL. It expresses the cost of random I/O relative to sequential reads. The lower the value, the "cheaper" the operations performing random I/O - typically Index Scans. The default has been 4.0 since 2002, but how was that value picked, and why hasn’t it changed in ~25 years even though storage has changed a lot? And should you reduce it when running on SSDs?

Fuzzing PostgreSQL — Adam Wolk
Fuzzing is a simple but powerful technique for discovering edge-case bugs in large, stateful systems like PostgreSQL. This talk shows how to apply it to Postgres’ client library libpq, which handles every network connection before the server sees a query. We’ll walk through building minimal harnesses, generating and mutating protocol inputs, and reasoning about what makes fuzzing effective on complex C codebases. The session is meant as a practical guide: how to start fuzzing a Postgres-related project, what challenges to expect, and what kind of issues you can realistically uncover along the way. In this session you will learn:
* what fuzzing is and why it finds bugs other techniques miss
* which PostgreSQL surfaces make good fuzzing targets and why
* how to apply fuzzing to Postgres networking components (libpq)
If you’re a PostgreSQL developer, this talk will add another tool for improving the stability and security of the projects you build.

PostgreSQL Design Patterns — Chris Ellis
PostgreSQL has a bewildering array of features, many of which can make an application developer’s life easier and reduce complexity in their application! We'll take a look at a range of use cases and some PostgreSQL Design Patterns which can be used to help solve them, all based on real problems I've run into over the years. Covering patterns to simplify your application architecture, to build your application logic faster, and to help prevent disasters from happening! Taking a look at use cases:
- Event Scheduling and Booking
- Queuing and Task Execution
- Text Search and Fuzzy Matching
- Category and Tag Searching
- Geolocation
- Unknown data
And more! This talk introduces a huge range of PostgreSQL features that are building blocks of patterns you can make use of. It's very much showing the art of the possible, and letting you choose how to use it.

An MCP for your Postgres DB — Pamela Fox
Model Context Protocol (MCP) is an open standard that lets us connect LLMs to external systems through explicit, discoverable tools. When we build MCP servers that expose a PostgreSQL database, our design choices directly influence how accurately, efficiently, and predictably LLMs translate user input into queries. In this talk, we’ll design MCP servers for PostgreSQL using Python and the FastMCP SDK, focusing on how different tool designs shape query behavior. We’ll examine common failure modes that arise when LLMs interact with databases—such as SQL injection, accidental DELETE or UPDATE operations, unbounded or expensive queries, and mismatches between user intent and executed SQL—and how various approaches either mitigate or amplify these issues. We’ll compare multiple styles of MCP tool arguments, from free-form SQL to structured, typed inputs. We’ll explore how MCP elicitation can improve tool success by allowing users to clarify intent in ambiguous or risky scenarios. Finally, we’ll also explore the tool selection problem: how to design MCP servers that expose multiple tables or databases in a way that helps LLMs reliably choose the right tool for the right job.

PostgreSQL Tooling Across AI Editors and Agents — Matt McFarland
Developers interact with PostgreSQL through a mix of editors, terminals, query consoles, plan viewers, and monitoring tools. As AI-native editors and agent-driven tools become part of everyday workflows, it’s increasingly valuable for PostgreSQL capabilities to be available wherever developers and agents operate, building on the strong foundation established in the VS Code PostgreSQL extension. In this session, we’ll show how PostgreSQL connection management, query execution, schema analysis, plan inspection, and performance insights are being extended from the VS Code PostgreSQL extension to Cursor and the GitHub Copilot CLI using an MCP server as the shared interface. Rich editor environments like VS Code and Cursor support interactive query execution, visualized results, and AI-assisted plan inspection and tuning. The same MCP-backed foundation enables the GitHub Copilot CLI to establish connections, run queries, analyze schemas, inspect plans, and reason about performance without a visual UI. We’ll also discuss how this architecture positions PostgreSQL tooling to continue expanding as new AI-driven environments emerge, allowing core database capabilities to surface naturally wherever AI developers work.

Querying & Visualizing Graphs in Postgres with Apache AGE — Christian Miles
Apache AGE lets you store graph data inside Postgres – but the usual database tools weren't built to show you what graphs look like. You can query nodes and edges by embedding Cypher inside SQL, but making sense of what comes back requires graph visualization. This talk covers what works and what fails when visualizing AGE query results. I'll walk through common problems: layouts that obscure rather than reveal, node-link diagrams that become unreadable at modest scale, and interaction patterns that break when graphs get dense. Then I'll show techniques that hold up – including when to reach for force-directed layouts versus layered or topology-aware approaches. The graph-in-Postgres model means you can pick the right model for the problem: relational for aggregations and filtering, graph for traversals and pattern matching. But graph query results need visualization approaches designed for connected data – techniques that reveal structure rather than flatten it back into rows. Drawing from fifteen years of building graph visualization tools, I'll show what that looks like in practice.

Choose the Right Azure Infrastructure to Improve Postgres Performance by Over 60% — Andrew Ruffin
Don’t know where to start when it comes to choosing your Postgres infrastructure on Azure? Join this session to learn about the latest infrastructure options on Azure. You'll come away with knowledge of how to quickly identify the hardware best suited for high Postgres performance at lower costs. Whether you're already deploying Postgres on Azure or considering it, this session is suited for those who want to continuously improve the developer and end user experience while also keeping FinOps happy.

What I’ve Learned Teaching Postgres to 200+ field engineers at Microsoft — Paula Berenguel
With a managed Postgres service like Azure Database for PostgreSQL, building the product is only half the job; the other half is making sure our technical field has the context and depth they need to support customers well. As the lead for PostgreSQL upskilling at Microsoft, I maintain the knowledge pipeline for our pre- and post-sales teams who work with some of the world’s most demanding enterprises. The Top 3 Field Friction Points: the most common misconceptions that lead to "anti-patterns" in customer environments, and how to proactively teach the "Postgres way" of solving them. Scaling the "Brown Bag" Culture: a blueprint for running bi-weekly technical deep dives and building a community of Postgres advocates that scales across a global organization. Whether you are a Lead Architect, an SRE Manager, or a Technical Advocate, you’ll walk away with a toolkit for elevating the technical bar and fostering a resilient Postgres culture within your own teams. And if nothing else, we’ll have some fun exploring the quirks of the elephant in the room!

PostgreSQL vs. SQL Server: Security Model Differences — Taiob Ali
Security is paramount in database management. If you are an SQL Server expert looking to learn PostgreSQL, it is essential to understand how PostgreSQL's security model differs from that of SQL Server. This talk will compare the security models of both database systems. Aimed at database administrators and developers, the presentation will highlight the key differences in how these systems handle user authentication, roles, and permissions. For example, did you know that:
- SQL Server distinguishes between logins and users, whereas PostgreSQL uses a unified role-based system for authentication and authorization.
- SQL Server offers predefined server and database roles, such as sysadmin, which provide a range of out-of-the-box permissions. Conversely, PostgreSQL includes default roles like pg_read_all_data, designed to simplify standard permission sets.
- SQL Server allows the creation of custom roles with flexible permission assignments. PostgreSQL's roles enable inheriting permissions from other roles and support complex role hierarchies.
Understanding these differences and others discussed during the session will enhance your grasp of the security model distinctions between SQL Server and PostgreSQL, enabling you to implement security best practices in either environment.
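The JSONB indexing trade-off raised in the session above can be sketched in a few lines of SQL. This is a minimal illustration with a hypothetical `events` table, not material from the talk itself:

```sql
-- Hypothetical table with a JSONB document column.
CREATE TABLE events (
    id  bigserial PRIMARY KEY,
    doc jsonb NOT NULL
);

-- A GIN index on the whole document supports containment queries (@>)
-- on any key, at the cost of a larger index and slower writes:
CREATE INDEX events_doc_gin ON events USING gin (doc);

SELECT * FROM events WHERE doc @> '{"type": "login"}';

-- If queries only ever filter on one hot key, a B-tree expression
-- index is typically smaller and faster -- but it is only used when
-- the query repeats the exact same expression:
CREATE INDEX events_doc_type ON events ((doc->>'type'));

SELECT * FROM events WHERE doc->>'type' = 'login';
```

Mismatches between the indexed expression and the query predicate are one common source of the "unused indexes" the abstract mentions.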

  • Format: Livestream

Topic: Open Source

Language: English


Wednesday, Jun 17, 2026
POSETTE: An Event for Postgres 2026 – Livestream 2

6:00 AM - 12:00 PM (UTC)

Now in its 5th year, POSETTE: An Event for Postgres (pronounced /Pō-zet/) is a free and virtual developer event. The name POSETTE stands for Postgres Open Source Ecosystem Talks Training & Education. Happening Jun 16-18, 2026, join us for 4 unique livestreams to hear from open source users and experts in many aspects of the PostgreSQL ecosystem—including on Azure. Come learn what you can do with the world’s most advanced open source relational database—from the nerdy to the sublime. Come chat with POSETTE speakers & other community members on the #posetteconf channel in the Microsoft Open Source Discord before, during, and after the event. The full schedule & speakers for Livestream 2 are below. In the meantime, you can catch up on last year’s talks at https://aka.ms/posette-playlist. Learn more about POSETTE 2026 at https://posetteconf.com/2026/

Livestream 2 Agenda

Postgres 19 Hackers Panel: What’s In, What’s Out, & What’s Next — Álvaro Herrera, Heikki Linnakangas, Melanie Plageman, Thomas Munro
What happens in the lead-up to a Postgres feature freeze, and why do some patches make the cut while others stall? Join four of the project’s open source contributors—Álvaro Herrera, Heikki Linnakangas, Melanie Plageman, and Thomas Munro—for a conversation about Postgres 19. With a combined ~75 years of experience hacking on Postgres, this panel will talk about some of the successful collaborations that pushed key features into Postgres 19, as well as the "missed the boat" list: features they wished had made the PG19 freeze (but are still in the works for the future). You’ll also get an early look at big-ticket items being engineered for Postgres 20 and beyond (yes, including multithreading). From the technical hurdles of working on Postgres to the personal reasons that keep these hackers coming back to this database, you’ll get a peek into what’s coming next with Postgres. Note: at the time of the recording of this keynote panel in April 2026, Postgres 19 will have just reached feature freeze and is expected to reach GA in the Sep/Oct 2026 timeframe.

pg_lake: Postgres as a lakehouse — Marco Slot
When Postgres is bad at something, we can make it good at it through extensions. Postgres is not a good analytics database: its analytical query performance is relatively poor, it has no facilities for interacting with object storage, and it only supports basic CSV as a file format. pg_lake is a set of open source Postgres extensions that add the ability to query/import/export raw data files in your data lake via simple SQL commands, and to create and manage Iceberg tables with high analytical query performance. It enables you to use Postgres as a versatile data "lakehouse". This talk describes how pg_lake extends Postgres and introduces a new query engine (by "de-embedding" DuckDB) and a new table storage engine (Iceberg), and seamlessly integrates them with all existing Postgres features and transactions in a production-ready way. We also show various new patterns that have emerged for using pg_lake, and how it combines with the pg_incremental extension.

Migrating VLDBs from Oracle to Azure Database for PostgreSQL — Adithya Kumaranchath
Migrating very large databases (VLDBs) to PostgreSQL becomes significantly more complex when the target is a managed cloud service. This session presents proven, field-tested strategies for migrating multi-terabyte Oracle workloads to Azure Database for PostgreSQL – Flexible Server with minimal downtime and predictable performance. We’ll cover the full migration lifecycle: validating schema compatibility, planning WAL throughput and storage layout, optimizing network and bulk-load operations, and using logical replication to achieve near-zero-downtime cutovers. You’ll learn practical techniques for handling partitions, large objects, and long-running transactions at scale, along with methods to avoid common VLDB pitfalls such as bloat, autovacuum stalls, slow COPY performance, and resource throttling caused by misaligned compute or IOPS. Real customer examples will highlight what works, what to avoid, and how to design stable, high-performance deployments from day one. You will leave with a repeatable VLDB migration checklist and tuning templates ready for immediate use.

Taming Unpredictable PostgreSQL Workloads with Azure HorizonDB — Silvano Coriani
Running PostgreSQL today often means choosing between overspending on idle capacity or risking performance dips when traffic suddenly spikes. Teams face unpredictable workloads—burst-heavy APIs, event-driven pipelines, seasonal peaks, analytics surges, and multi-tenant SaaS patterns that don’t respect fixed sizing. Traditional provisioning forces a compromise: pay for headroom you rarely use, or manually resize under pressure when demand changes. This session looks at a more adaptive PostgreSQL consumption approach where compute aligns with workload intensity instead of static guesses. Sudden surges sustain performance without manual resizing, while quiet periods avoid burning budget on idle resources. The goal: reduce operational friction, improve predictability under load, and simplify day-to-day management—without changing applications or re-architecting your environment. Attendees will leave with a clear mental model of why variability is so hard to plan for, how consumption-aligned compute mitigates those challenges, and what patterns make PostgreSQL deployments more resilient, cost-efficient, and easier to operate.

The Hitchhiker’s Guide to PostgreSQL Hacking: Don’t Panic, Just Start Small — Xuneng Zhou
Hacking on PostgreSQL can feel overwhelming: a massive codebase, a rigorous review culture, and a patch queue that never seems to shrink. Many aspiring contributors ask the same questions: Where do I begin? What should I work on? This talk offers a practical roadmap for entering PostgreSQL development. Rather than starting with large features or ambitious rewrites, we focus on a disciplined approach: reviewing patches, fixing small bugs, testing edge cases, and building intuition for the codebase. We explore how small improvements—clarifying a review comment, or isolating a bug—compound into deeper understanding and meaningful contributions. We will also discuss the psychological side of hacking: navigating imposter syndrome, learning from reviews, and turning feedback into momentum. PostgreSQL is not conquered in a single patch. It is learned incrementally. This talk demonstrates how sustained, focused effort transforms confusion into contribution. What attendees will learn:
• How to choose a first patch
• How patch review builds architectural understanding
• How small changes lead to larger infrastructure work
• How to navigate PostgreSQL’s review culture effectively
• How to turn feedback into growth instead of frustration

From trust to Tokens: A Short History of PostgreSQL Authentication — Murat Tuncer
PostgreSQL offers a surprisingly large number of authentication methods—but most users only encounter one or two of them, often without understanding why they exist. In this short talk, we take a fast, story-driven tour through the evolution of PostgreSQL authentication. Starting with early Unix-centric assumptions (trust, ident, peer), we move through password authentication, enterprise integrations like LDAP and Kerberos, and end with modern identity-driven approaches such as certificate- and token-based authentication. Rather than listing every option, this talk focuses on key inflection points: what problem PostgreSQL was solving at each stage, what trade-offs were made, and how those decisions still affect real-world deployments today. Attendees will leave with a clear mental model of PostgreSQL authentication—enough to choose wisely, avoid common mistakes, and understand where the ecosystem is heading.

What's new with constraints in Postgres 18 — Gülçin Yıldırım Jelínek
PostgreSQL 18 introduces temporal keys and NOT ENFORCED constraints, and promotes NOT NULL to a first-class constraint. Constraint handling for partitioned tables has also improved. In this session, we’ll walk through these changes with practical examples and finish with a sneak peek at what’s coming in PostgreSQL 19.

PostgreSQL queues done right with PgQ — Alexander Kukushkin
Modern applications often rely on message queues - for background jobs, data pipelines, notifications, and event-driven architectures. Using something external like Kafka, Redis, or RabbitMQ increases operational complexity and introduces new failure modes. All of this can be avoided by keeping the message queue in the database. Quick research on the internet shows that developers commonly try to engineer a database queue based on SELECT … FOR UPDATE SKIP LOCKED (available since 9.5). This approach works reasonably well under small load, and falls apart spectacularly if subscribers can’t keep up with the publishing rate. PostgreSQL can do better - and in fact, it already did. PgQ is a PostgreSQL extension that provides a generic, high-performance lockless queue with simple SQL. In this talk, we start with why common SELECT … FOR UPDATE SKIP LOCKED approaches fall apart under load, and how PgQ quietly solved those problems a couple of decades ago. Then we take a deep look at PgQ internals: snapshot-based event reads, transaction-ordered delivery, and how PgQ gets away with just a single index to achieve high throughput and consistency. Finally, we will discuss practical patterns for running PgQ on managed PostgreSQL services where this extension is typically not available.

Building safety tooling for risk-free AI tuning of Postgres: Fast cars need fast brakes — Mohsin Ejaz
Optimizing your database with AI is a tantalizing prospect, but how can we make sure to do this in a risk-free manner? In this talk, I will share my experience with building safeguards and guardrails for automated PostgreSQL tuning to help you sleep well at night — the better the safety net, the more freely we can let the agent work to improve the system. I will walk through tried and tested safety patterns: memory and performance monitoring, and validation techniques that ensure every change is safe. The goal is simple — get the performance gains you want while minimizing risk to your system. Whether you are considering automated tuning or building your own tools, building for safety should always be the highest priority.

Logical Decoding Protocol V2: Streaming Transactions, Schema Changes, Backfills and more — Brandon Mochama
My talk will examine the practical details of implementing a logical decoding consumer, drawing from my experience in building a client in Go using pglogrepl and from studying production CDC systems like PeerDB and Dolt. Since there are many talks, blog posts and docs that provide overviews, I want to specifically cover the non-obvious and undocumented tidbits on logical decoding that I encountered. Specifically:
- Protocol V2 streaming semantics: handling interleaved transaction chunks, the StreamStart/StreamStop/StreamCommit message sequence, and why duplicate StreamAbort messages occur
- TOAST column handling: the 'u' (unchanged) tuple type, backfill strategies, and the REPLICA IDENTITY tradeoffs different systems make
- DDL & schema change detection: reactive vs proactive, using RelationMessage deltas rather than DDL parsing, and how PeerDB propagates column additions to destinations
- Initial data sync: leveraging the exported snapshot from slot creation for consistent backfills without race conditions
- And more...
The talk assumes some familiarity with PG replication concepts and is aimed at developers building or maintaining CDC pipelines. I have been writing about PostgreSQL on my blog for several years; this will be my first conference talk.

PostgreSQL Generated Columns by Example — Paolo Melchiorre
PostgreSQL generated columns are a powerful feature, and recent releases have significantly expanded what they can do. With PostgreSQL 18, generated columns are now virtual by default, while still allowing stored behavior, introducing new trade-offs around performance, storage, and query behavior. In this talk, we explore PostgreSQL generated columns *by example*, using concrete, practical scenarios to understand how they behave and when they are a good fit. We will look at how generated columns evolved across PostgreSQL versions, what problems they solve well, and where careful design is still required. To ground these examples in real-world usage, the talk uses Django as a concrete case study of how PostgreSQL generated columns are exposed and used through a widely adopted Python framework. This helps show how database features are actually used in production, and how ORM abstractions influence their adoption. The talk also reflects recent improvements in Django 6.0 that better align with PostgreSQL’s generated column behavior. Attendees will leave with a clear mental model, practical examples, and guidance on when to use virtual versus stored generated columns in real systems.
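The SELECT … FOR UPDATE SKIP LOCKED queue pattern that the PgQ session discusses (and critiques) can be sketched as follows. This is a minimal illustration with a hypothetical `job_queue` table, not code from the talk:

```sql
-- Hypothetical job table for a database-backed queue.
CREATE TABLE job_queue (
    id      bigserial PRIMARY KEY,
    payload jsonb NOT NULL,
    done    boolean NOT NULL DEFAULT false
);

-- Each worker claims one unprocessed job. SKIP LOCKED (since 9.5)
-- skips rows already locked by concurrent workers instead of
-- blocking on them, so workers never wait on each other:
WITH next_job AS (
    SELECT id
    FROM job_queue
    WHERE NOT done
    ORDER BY id
    LIMIT 1
    FOR UPDATE SKIP LOCKED
)
UPDATE job_queue q
SET done = true
FROM next_job
WHERE q.id = next_job.id
RETURNING q.id, q.payload;
```

As the abstract notes, this works under modest load but degrades when consumers lag behind producers, which is the gap PgQ's snapshot-based design is meant to close.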

  • Format: Livestream

Topic: Open Source

Language: English


Wednesday, Jun 17, 2026

POSETTE: An Event for Postgres 2026 – Livestream 3

3:00 PM - 9:00 PM (UTC)

| The Wonderful World of WAL | The Postgres write-ahead log, or WAL, is basically a change-log for the database. It enables several important Postgres features: crash recovery, point-in-time recovery, and binary and logical replication. This talk explains what is stored in the WAL, how binary and logical replication work, and how replication slots track replication progress.SLIDES AT https://momjian.us/main/writings/pgsql/wal.pdf | Bruce Momjian | | --------------------------------------------------------------------------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | --------------------------------------------- | | Building Event-Driven Systems with PostgreSQL Logical Replication and Drasi | PostgreSQL's logical replication captures database changes in 
real-time, but most developers still rely on external streaming platforms like Kafka for event processing. This session shows you how to build event-driven architectures directly on PostgreSQL using its write-ahead log and Drasi, a CNCF Sandbox project that adds continuous queries and filtering on top of change data capture.You'll see a comparison of three CDC approaches: wal2json with custom consumers, Debezium with Kafka, and Drasi with PostgreSQL. I'll walk through live benchmarks measuring database overhead, end-to-end latency, and lines of code required for each approach. Using a working example, I'll demonstrate how PostgreSQL captures changes, how Drasi filters them with declarative queries, and how to trigger downstream actions—while monitoring PostgreSQL's actual CPU and network usage throughout.You'll learn when logical replication makes sense for your architecture, how to configure replication slots and publications, how to avoid WAL accumulation issues, and how to choose between different CDC approaches based on your requirements. This session focuses on practical PostgreSQL skills you can apply immediately, whether you're building on Azure, AWS, or on-premises. | Diaa Radwan | | pgcov: Bringing Real Test Coverage to PostgreSQL Code | We rely heavily on PostgreSQL functions, procedures, and SQL logic, yet we largely test them as black boxes. Tests may pass, but we rarely know what actually executed and what code paths remain untested.pgcov proposes a missing piece in the PostgreSQL tooling ecosystem: coverage analysis for SQL and PL/pgSQL, similar to what `go test -cover` or `pytest --cov` provides for application code.The idea is simple:- treat SQL as first-class source code,- run isolated tests against it,- instrument execution at the SQL/PLpgSQL level,- and produce actionable coverage reports.pgcov does not require PostgreSQL extensions, does not depend on psql, and is designed to integrate naturally into CI/CD pipelines. 
It complements existing testing tools like pgTAP by answering a different question:“Which parts of our database code are actually tested?”This talk explores the motivation, design approach, and how pgcov can significantly improve confidence in database-centric systems without changing how we write PostgreSQL code today. | Pavlo Golub | | Quorum-Based Consistency for Cluster Changes with CloudNativePG Operator | Most people don’t think of Postgres in the context of quorum or distributed systems theory but vanilla open source Postgres has supported quorum commits across multiple replicas for almost 10 years now. Technologies like cassandra and dynamo popularized quorum consistency in the hot path of distributed writes and reads, but the theory also applies to cluster reconfigurations in a single-writer database like Postgres. Stateful operators at level V of the capabilities framework require very careful end-to-end coordination between control plane and data plane algorithms to avoid data loss when providing auto-healing under circumstances like network partitions or compounded failures. This session will explore how quorum consistency can be applied in the CloudNativePG operator, offering insights to users of Postgres on Kubernetes about trusting Postgres to keep our data safe. | Jeremy Schneider & Leonardo Cecchi | | From Queries to Agents: The Next Era of Data Retrieval on PostgreSQL | As AI agents move from demos to production, the real challenge isn’t the model, it’s reliable, safe, and context‑aware data retrieval. 
In this talk, we explore how PostgreSQL is becoming the backbone for agent workflows through emerging retrieval patterns that begin with today's Model Context Protocol (MCP) and point toward more unified approaches. We'll break down how agents interact with Postgres today using MCP servers, what goes wrong when agents generate SQL blindly, and why retrieval increasingly requires robust context correction alongside blended retrieval: vector similarity, relational SQL, and graph-aware traversal working together to give agents a complete and reliable view of the data. We'll then outline the architectural principles shaping the next generation of retrieval layers—designed to give agents controlled, high-quality access to Postgres without bespoke glue code. You'll leave with a clear mental model for building Postgres-backed agents today, and a practical roadmap for where agent retrieval at Microsoft is heading next. | Abe Omorogbe |
| PostgreSQL 17 vs 18: Side-by-Side Performance Wins in Real-World Queries | Simply upgrading PostgreSQL can make many everyday queries run faster, without any schema changes or application rewrites. In this talk, I'll do a side-by-side comparison of Postgres 17 and 18 using common query patterns that developers and operators see in real systems. For each example, I'll compare execution plans and runtimes to show where PostgreSQL 18 is faster and what planner or executor changes are responsible. | Divya Bhargov |
| Production RAG at Scale with Azure Database for PostgreSQL | One of PostgreSQL's greatest strengths is its ability to serve as more than just a traditional database—it can be the foundation for intelligent systems. With the rise of AI-powered applications, PostgreSQL has emerged as a powerful platform for Retrieval-Augmented Generation (RAG) implementations.
So, why should we consider PostgreSQL over specialized vector databases for production RAG systems? In this talk, we'll explore a complete production RAG architecture that powers Serenity Star's enterprise knowledge management platform. With features like pgvector for semantic search, DiskANN for high-performance indexing, and seamless integration with Microsoft Semantic Kernel, PostgreSQL proves itself as a robust foundation for intelligent knowledge systems processing millions of queries monthly. Drawing from our real-world experience as Azure customers scaling RAG from prototype to production, we will share practical insights on architecture decisions, performance optimization strategies, and monitoring approaches. We'll cover the complete pipeline: from document chunking and vector storage to semantic search optimization and production monitoring and observability. | Julia Schröder Langhaeuser & Paula Santamaría |
| The Rise of PostgreSQL as the Everything Database | PostgreSQL is no longer just a transactional workhorse - it's rapidly becoming the "everything database" for modern developers. Backed by a vibrant open-source community, Postgres is blurring the lines between OLTP, analytics, and vector storage, reducing the need for multiple specialized systems and the complexity that comes with them. According to the 2025 Stack Overflow Developer Survey (https://survey.stackoverflow.co/2025/technology#1-databases), PostgreSQL is now the #1 most used and most loved database, chosen by over 61% of professional developers.
In this session, we'll explore what's driving that momentum - from native JSON and time-series support to full-text search, vector embeddings, and more, all built on Postgres' extensible core. We'll also highlight how Azure Database for PostgreSQL helps developers scale with confidence in the cloud - offering enterprise-grade reliability, built-in AI, and fully managed infrastructure, all while staying true to Postgres' familiar open-source roots. If you're building apps, reducing data sprawl, or just love open source, come see why PostgreSQL is now the everything database. | Varun Dhawan |
| Postgres isn't slow, your storage is | A lot of Postgres "scaling problems" look the same: low CPU, high latency, falling insert rates, and unpredictable behavior once the dataset grows into the terabytes. In most cases we've investigated, the bottleneck wasn't Postgres; it was slow or inconsistent storage. In this talk we run the same workloads on networked block storage and on local NVMe and show what actually changes. Using perf, flamegraphs, and Postgres internal stats, we demonstrate how WAL fsync, buffer reads, and checkpoints dominate on slow I/O, and how those costs largely disappear on NVMe. Insert-heavy time-series workloads scale linearly, OLTP latency stays stable under reporting queries, and performance becomes predictable. We also show where NVMe does not help. For wide scans and repeated aggregations over hundreds of millions of rows, Postgres still spends most of its time in executor and tuple-processing code. Fast storage moves the bottleneck to CPU, but it doesn't turn a row store into a column store. | Sai Srirampur |
| Why we built Azure HorizonDB for PostgreSQL | Modern Postgres workloads keep running into the same questions: How do you get consistent performance when traffic suddenly spikes? How do you keep latency predictable? How do you scale reads without bolting on more replicas than needed?
What do failovers look like when you can't afford long downtime? This talk digs into why our Postgres team at Microsoft built Azure HorizonDB for PostgreSQL, a shared-storage architecture designed to handle those pressures. We'll walk through the core design decisions behind HorizonDB: what "database as logs" means, how storage and compute are separated, how scale-out works, and more. If you are interested in which problems HorizonDB is meant to solve, what differentiates it from other managed PostgreSQL offerings in Azure, or how you can leverage HorizonDB for your workloads—then this session will help you make sense of it. | Dingding Lu |
| Maintaining Large Tables in PostgreSQL | This talk focuses on the real problems large tables create at scale: autovacuum falling behind, bloat accumulating silently, planner misestimation, WAL explosions, and maintenance operations colliding with production traffic. Rather than treating these as isolated issues, we'll examine them as symptoms of unsustainable data growth. Using PostgreSQL (with specific considerations for Azure Database for PostgreSQL), the session walks through practical strategies to sustain performance over time: per-table autovacuum tuning, bloat and statistics management, maintenance options, and observability guardrails. Finally, we'll address the critical architectural question: when is a large table no longer the right abstraction? We'll compare options such as partitioning, hot/cold splits, rollup tables, sharding, and offloading analytical workloads. | Sarat Balijepalli |
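The change-data-capture session above mentions configuring replication slots and publications and avoiding WAL accumulation. As a rough, hypothetical sketch of what that setup can look like (the table and slot names are invented for illustration; it assumes `wal_level = logical` and that the wal2json output plugin is installed on the server):

```sql
-- Hypothetical setup for consuming row changes as JSON.
-- Requires wal_level = logical in postgresql.conf (restart needed)
-- and the wal2json output plugin available on the server.

-- Publish changes from one table (used by native logical replication consumers).
CREATE PUBLICATION orders_pub FOR TABLE orders;

-- Create a logical replication slot that decodes WAL via wal2json.
SELECT pg_create_logical_replication_slot('orders_slot', 'wal2json');

-- Poll for decoded changes; each row's "data" column is a JSON description
-- of a committed transaction's inserts, updates, and deletes.
SELECT data FROM pg_logical_slot_get_changes('orders_slot', NULL, NULL);

-- Drop the slot when the consumer goes away: an abandoned slot pins WAL
-- on the primary, causing the accumulation issues the session warns about.
SELECT pg_drop_replication_slot('orders_slot');
```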

  • Format: Livestream

Topic: Open Source

Language: English


Jun

18

Thursday

2026

POSETTE: An Event for Postgres 2026 – Livestream 4

6:00 AM - 12:00 PM (UTC)

Now in its 5th year, POSETTE: An Event for Postgres (pronounced /Pō-zet/) is a free and virtual developer event. The name POSETTE stands for Postgres Open Source Ecosystem Talks Training & Education. Happening Jun 16-18, 2026, join us for 4 unique livestreams to hear from open source users and experts in many aspects of the PostgreSQL ecosystem—including on Azure. Come learn what you can do with the world's most advanced open source relational database—from the nerdy to the sublime. Come chat with POSETTE speakers & other community members on the #posetteconf channel in the Microsoft Open Source Discord before, during, and after the event. Full schedule & speakers for Livestream 4 below. In the meantime, you can catch up on last year's talks at https://aka.ms/posette-playlist. Learn more about POSETTE 2026 at https://posetteconf.com/2026/

Livestream 4 Agenda

| Session Title | Session Description | Speaker |
| :---------------- | :---------------------- | :----------------------- |
| My Postgres partitioning cookbook | Over the last three years, I have tried every single thing I could think of with Postgres partitions, and I made many mistakes. Most of them on my laptop; some lessons were a bit more painful. I've seen my fair share of performance problems and converted inheritance-based partitions to native and back. I've created many default partitions and by now managed to almost drop all of them. In this session, I want to go over all my lessons learned, especially the ones that surprised me the most. I will cover the basics of index and foreign key creation and implicit inheritance, and how foreign keys caused catalog corruption when cleaning up data. I will discuss the good, the bad, and the ugly of the default partition and leave it to you to judge them as a blessing or a curse. An important part of working with Postgres partitions is cheating with the catalog.
When I proposed this to my team the first time, everybody was nervous about it, as they should have been, but these days we do it without thinking. It is bad practice, but sometimes it is just the only way to get things done. This presentation will show you the basics, but also the pitfalls (mines) and obvious mistakes you can easily avoid. | Derk van Veen |
| Exploring property graphs with SQL/PGQ in PostgreSQL | Using relational databases, how do you efficiently determine an optimal path covering all your tickets in the game of Ticket to Ride? Traditionally, solutions to graph-like problems in PostgreSQL—such as reachability and shortest-path discovery—have relied on Recursive Common Table Expressions (CTEs), which are often verbose, difficult to optimize, and complex to maintain. The SQL:2023 standard introduces SQL/PGQ (Property Graph Queries), finally allowing a relational database like PostgreSQL to treat graph traversals as first-class, declarative operations. And we are working to bring SQL/PGQ support to a future release of PostgreSQL, hopefully in PostgreSQL 19! In this session, you will explore property graph (SQL/PGQ) capabilities using a Ticket to Ride-like schema as a practical example. You will learn important aspects of SQL/PGQ: from defining a property graph over standard relational tables to using the MATCH clause to implement pathfinding logic. Beyond the game, the session provides an under-the-hood look at the SQL/PGQ implementation in Postgres. The transformation of graph patterns into relational join trees and the complexities of path-variable bindings will be discussed. You will leave with a clear understanding of the Postgres implementation of the SQL/PGQ standard from one of the authors of the patch himself. | Ashutosh Bapat |
| Journey of developing a performance optimization feature in PostgreSQL | In this talk, I will share the journey of identifying and optimizing a performance bottleneck in PostgreSQL.
The session will walk through a systematic approach to diagnosing performance issues — distinguishing whether the bottleneck lies in the CPU, I/O, or network — and how iterative profiling and analysis can guide effective optimizations. Using perf and other diagnostic tools, we'll examine how bottlenecks can shift during optimization, sometimes masking real gains. I will demonstrate how to effectively measure improvement in performance through careful tuning of the database, along with use of pgbench and custom benchmarking scripts tailored to the optimization under test. As a practical example, we will explore an optimization in PostgreSQL's physical replication that enables the WAL sender to transmit WAL records to standbys before they are flushed to disk on the primary. This enhancement aims to reduce replication latency by leveraging WAL buffers to send data more proactively, minimizing disk reads and improving network utilization. For large transactions, this approach allows most WAL data to be sent in parallel with ongoing writes on the primary, aligning the flush operations on primary and standby more closely and significantly reducing replication lag. | Rahila Syed |
| pg_duckdb in Action: Accelerating Analytics on Azure Database for PostgreSQL | If your analytics workflow starts with exporting data from Postgres, you're not alone. Many teams build ETL pipelines just to answer questions about data that already lives in Postgres. But what if you didn't have to move it at all? With pg_duckdb, you can run fast, columnar-style analytical queries inside Postgres without setting up a separate warehouse, a sync process, or another system to manage. In this talk, I'll walk through how pg_duckdb works, what it's good at, and where it fits into real-world workflows.
The demo will showcase pg_duckdb installed on Azure Database for PostgreSQL, where I'll run analytical queries over existing tables and query Parquet files directly from object storage—without loading them into Postgres first. It's not a full replacement for a data warehouse, but in many cases, it's a faster, simpler path to the answers your team needs—right from the database you already use. Attendees will learn how pg_duckdb works, how to use it effectively, and how to speed up analytics without extra tools or data movement. You'll leave with practical tips and real examples you can apply right away. | Nitin Jadhav |
| Modelling Postgres Performance Degradation on Burstable Cloud Instances | Many developers run Postgres on "burstable" cloud instances (like Azure B-series or AWS T-series) to optimise costs. While cost-effective, these instances operate on a CPU credit model that introduces non-linear performance risks. The danger is not a system crash, but throughput exhaustion. When CPU credits are depleted, the cloud provider throttles the CPU to its base frequency. Because Postgres is unaware of this external throttling, it continues to accept connections it can no longer process in a timely manner. This leads to a cascading failure: connection pools saturate, p99 latencies skyrocket, and the app layer eventually times out. The database will effectively be unavailable despite being "online." In this session, I will demonstrate how to model the exact saturation point of a throttled Postgres instance. I will show you a simple simulation method to calculate your "Base Performance Ceiling" without the need for expensive load-testing infrastructure, allowing you to right-size your database before the credits run out.
| Chun Lin Goh |
| Past, Present, and Future: Logical Decoding and Replication in PostgreSQL | Logical replication has evolved into a foundational capability for modern PostgreSQL deployments, enabling real-time data synchronization and partial replication. What began as a low-level decoding API in PostgreSQL 9.4 has now matured into a powerful feature, allowing for fine-grained control over what gets replicated and where. In this talk, we'll trace the journey of logical decoding and replication in PostgreSQL, from its early adoption through extensions like pglogical, to the robust native features introduced in recent PostgreSQL releases. We'll dive into how these capabilities have empowered change data capture (CDC), zero-downtime migrations, and real-time analytics pipelines. We'll also explore how innovations in the ecosystem, particularly work on multi-master replication, are shaping the future of distributed PostgreSQL, with features like out-of-the-box asynchronous logical replication and automated DDL propagation eliminating the traditional limitations of read-only replicas and single-writer systems. Key takeaways: understand the architecture and internals of logical decoding; compare native and extension-based logical replication; discover what's next, including DDL replication, performance tuning, and multi-master replication. | Hari Kiran |
| From Dev to Prod: Securing Postgres the Right Way | Is your Postgres database really secure, or just "working"? Why do security issues keep showing up after launch? Many teams rely on defaults until an incident proves otherwise. This session tackles common Postgres security blind spots developers face in real systems.
We'll walk through practical techniques to secure access, data, and operations without slowing delivery, and enhance the security posture of your application. Key takeaways: 1) least-privilege roles, schema isolation, role design, and permission boundaries; 2) protecting data at rest and in transit; 3) safe extension and function usage; 4) new Postgres enhancements around security and observability. Join me to turn security into a design habit, not an afterthought. Looking forward to engaging with you. Let's make the Postgres world a little more fun and secure! | Sakshi Nasha |
| Vacuuming Enhancements in PostgreSQL 18: Faster, Smarter, More Predictable | PostgreSQL 18 delivers one of the most significant sets of VACUUM and ANALYZE improvements in years, making maintenance faster, more predictable, and easier to tune. This session highlights the key changes and explains how they automatically alleviate common pain points in real-world workloads, backed by field experience. We'll cover: asynchronous I/O (AIO), which overlaps reads with processing to reduce heap-scan delays and speed up VACUUM; dynamic autovacuum scaling, adjusting autovacuum_max_workers on the fly without restarts for elastic maintenance; earlier autovacuum triggers, using autovacuum_vacuum_max_threshold and hard caps to prevent bloat on large tables; eager freezing, which shortens future anti-wraparound VACUUMs automatically; explicit tail shrinking, which reclaims unused space at the end of tables without blocking operations; recursive VACUUM/ANALYZE, which handles inheritance-based partitions seamlessly and avoids redundant ANALYZE for declarative parents; and improved observability, with real-time cost-delay attribution (track_cost_delay_timing), per-backend I/O/WAL stats, and byte-level I/O metrics for proactive tuning. | Shashikant Shakya |
| Move Less, Move Faster: Speeding Up Citus Cluster Scaling | Scaling a distributed Postgres cluster often isn't limited by "adding a VM"; it's limited by how long it takes to rebalance data safely.
In this talk, I'll give a minimal mental model of how Citus distributes data (shards, placements, and coordinator/worker roles), then explain why cluster scaling can feel painfully slow: data movement is expensive, and concurrency is constrained by safety and resource limits. We'll then look at two concrete steps toward faster elastic scaling: shard rebalancing improvements that increase parallelism and reduce bottlenecks, and snapshot-based node addition, where a new worker starts as a clone of an existing one, dramatically reducing how much data needs to be copied during rebalancing. Attendees will come away with a clearer way to reason about scaling time, plus actionable guidance for running scale-out/scale-in events safely. | Muhammad Usama |
| LISTEN Carefully: How NOTIFY Can Trip Up Your Database | LISTEN/NOTIFY is a powerful and elegant PostgreSQL feature for asynchronous communication between backend components. It allows lightweight data transfer and instant notification updates without the need for a separate message bus. However, hidden within this simplicity and elegance is a surprising hazard: NOTIFY can cause unexpected statement and lock timeouts that seem to come out of nowhere. The reason for this is that each NOTIFY call obtains a cluster-wide exclusive lock to serialize notifications. Under high concurrency, seemingly innocuous code can end up causing performance bottlenecks and confusing backend errors—especially once traffic scales to levels difficult to replicate in development or on your laptop. In this talk, we'll walk through a real-world scenario involving a trigger using NOTIFY to alert other systems LISTENing for changes made to a high-traffic table. We'll do a deep dive into the problems caused, the investigation of the symptoms, and a solution for fixing the issue in production. You'll leave this talk equipped with an understanding of this wonderfully useful feature, along with its potential risks and what you can do to mitigate them.
| Jimmy Angelakos |
| Where Does My INSERT Go? A Logical Replication Story | What really happens to a single INSERT in PostgreSQL once it enters the system? In this talk, we trace the complete lifecycle of one tuple as it travels through PostgreSQL's logical replication pipeline. Starting at the executor, we watch the tuple become a WAL record, explore what changes when wal_level=logical, and reveal how the logical decoding layer reconstructs row-level changes from low-level WAL fragments. We'll step through the inner workings of the ReorderBuffer, explain how replication slots guarantee durability, and show how output plugins convert decoded changes into a logical stream ready for subscribers. On the receiving side, we follow the apply worker as it processes transactions, resolves ordering, handles conflicts, and replays the change into the subscriber database. By the end, you'll have a clear mental model of each stage—from WAL generation to apply—and a deeper understanding of how PostgreSQL reliably moves data through logical replication. Whether you run logical replication, build CDC pipelines, or simply want to understand the internals behind one of PostgreSQL's most powerful features, this talk will give you a guided, intuitive, and highly practical look behind the scenes. | Hamid Akhtar |
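The burstable-instance session above describes modelling the point at which CPU credits run out. A minimal sketch of such a model, with invented numbers and a simplified credit convention (1 credit = 1 vCPU running at 100% for 1 minute), not any provider's published accounting:

```python
# Hypothetical model of a burstable instance's CPU credit balance.
# All rates and starting balances below are illustrative, not real SKU values.

def spend_per_hour(load_fraction, vcpus=1):
    """Credits consumed per hour at a given average CPU utilization
    (60 vCPU-minutes per vCPU per hour)."""
    return load_fraction * vcpus * 60

def baseline_fraction(earn_per_hour, vcpus=1):
    """Utilization sustainable indefinitely: where spend equals earn."""
    return earn_per_hour / (vcpus * 60)

def hours_until_throttle(initial_credits, earn_per_hour, load_fraction, vcpus=1):
    """Hours until the credit balance hits zero and throttling begins,
    or None if the load is at or below the baseline."""
    burn = spend_per_hour(load_fraction, vcpus) - earn_per_hour
    if burn <= 0:
        return None  # credits never deplete at this load
    return initial_credits / burn

# Example: 288 starting credits, 24 credits earned per hour on 1 vCPU.
print(baseline_fraction(24))               # baseline load: 0.4 (40%)
print(hours_until_throttle(288, 24, 0.7))  # sustained 70% load: ~16 hours
```

The useful output is the "Base Performance Ceiling" the abstract refers to: any sustained load above `baseline_fraction` is living on borrowed time, and the model tells you roughly how much.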

  • Format: Livestream

Topic: Open Source

Language: English



Register for this series


By registering for this event you agree to abide by the Microsoft Reactor Code of Conduct.