
LEARN, CONNECT, BUILD

Microsoft Reactor

Join Microsoft Reactor and engage with developers live

Ready to get started with AI and the latest technologies? Microsoft Reactor provides events, training, and community resources to help developers, entrepreneurs and startups build on AI technology and more. Join us!



POSETTE: An Event for Postgres 2026 – Livestream 2

17 June 2026 | 6:00 AM - 12:00 PM (UTC)

  • Format: Livestream

Topic: Open Source

Language: English

Now in its 5th year, POSETTE: An Event for Postgres (pronounced /Pō-zet/) is a free and virtual developer event. The name POSETTE stands for Postgres Open Source Ecosystem Talks Training & Education. Happening Jun 16-18, 2026, join us for 4 unique livestreams to hear from open source users and experts in many aspects of the PostgreSQL ecosystem—including on Azure. Come learn what you can do with the world’s most advanced open source relational database—from the nerdy to the sublime. Come chat with POSETTE speakers & other community members on the #posetteconf channel in the Microsoft Open Source Discord before, during, and after the event. The full schedule & speakers for Livestream 2 are below. In the meantime, you can catch up on last year’s talks at https://aka.ms/posette-playlist.

Learn more about POSETTE 2026 at https://posetteconf.com/2026/


Livestream 2 Agenda
--

Postgres 19 Hackers Panel: What’s In, What’s Out, & What’s Next

What happens in the lead-up to a Postgres feature freeze, and why do some patches make the cut while others stall? Join four of the project’s open source contributors—Álvaro Herrera, Heikki Linnakangas, Melanie Plageman, and Thomas Munro—for a conversation about Postgres 19. With a combined ~75 years of experience hacking on Postgres, this panel will talk about some of the successful collaborations that pushed key features into Postgres 19, as well as the "missed the boat" list: features they wish had made the PG19 freeze (but are still in the works for the future).

You’ll also get an early look at big-ticket items being engineered for Postgres 20 and beyond (yes, including multithreading). From the technical hurdles of working on Postgres to the personal reasons that keep these hackers coming back to this database, you’ll get a peek into what’s coming next with Postgres.

Note: At the time of the recording of this keynote panel in April 2026, Postgres 19 will have just reached feature freeze and is expected to reach GA in the Sep/Oct 2026 timeframe.
Speakers: Álvaro Herrera, Heikki Linnakangas, Melanie Plageman, Thomas Munro
pg_lake: Postgres as a lakehouse

When Postgres is bad at something, we can make it good at it through extensions. Postgres is not a good analytics database: its analytical query performance is relatively poor, it has no facilities for interacting with object storage, and it supports only basic CSV as a file format.

pg_lake is a set of open source Postgres extensions that add the ability to query, import, and export raw data files in your data lake via simple SQL commands, and to create and manage Iceberg tables with high analytical query performance. It enables you to use Postgres as a versatile data "lakehouse".

This talk describes how pg_lake extends Postgres with a new query engine (by "de-embedding" DuckDB) and a new table storage engine (Iceberg), and how it seamlessly integrates them with all existing Postgres features and transactions in a production-ready way. We also show various new patterns that have emerged for using pg_lake, and how it combines with the pg_incremental extension.
Speaker: Marco Slot
Migrating VLDBs from Oracle to Azure Database for PostgreSQL

Migrating very large databases (VLDBs) to PostgreSQL becomes significantly more complex when the target is a managed cloud service. This session presents proven, field‑tested strategies for migrating multi‑terabyte Oracle workloads to Azure Database for PostgreSQL – Flexible Server with minimal downtime and predictable performance.

We’ll cover the full migration lifecycle: validating schema compatibility, planning WAL throughput and storage layout, optimizing network and bulk‑load operations, and using logical replication to achieve near‑zero‑downtime cutovers. You’ll learn practical techniques for handling partitions, large objects, and long‑running transactions at scale, along with methods to avoid common VLDB pitfalls such as bloat, autovacuum stalls, slow COPY performance, and resource throttling caused by misaligned compute or IOPS.

Real customer examples will highlight what works, what to avoid, and how to design stable, high‑performance deployments from day one. You will leave with a repeatable VLDB migration checklist and tuning templates ready for immediate use.
Speaker: Adithya Kumaranchath
Taming Unpredictable PostgreSQL Workloads with Azure HorizonDB

Running PostgreSQL today often means choosing between overspending on idle capacity or risking performance dips when traffic suddenly spikes. Teams face unpredictable workloads—burst‑heavy APIs, event‑driven pipelines, seasonal peaks, analytics surges, and multi‑tenant SaaS patterns that don’t respect fixed sizing. Traditional provisioning forces a compromise: pay for headroom you rarely use, or manually resize under pressure when demand changes.
This session looks at a more adaptive PostgreSQL consumption approach where compute aligns with workload intensity instead of static guesses. Sudden surges are absorbed without manual resizing, while quiet periods avoid burning budget on idle resources. The goal: reduce operational friction, improve predictability under load, and simplify day‑to‑day management—without changing applications or re‑architecting your environment.
Attendees will leave with a clear mental model of why variability is so hard to plan for, how consumption‑aligned compute mitigates those challenges, and what patterns make PostgreSQL deployments more resilient, cost‑efficient, and easier to operate.
Speaker: Silvano Coriani
The Hitchhiker’s Guide to PostgreSQL Hacking: Don’t Panic, Just Start Small

Hacking on PostgreSQL can feel overwhelming: a massive codebase, a rigorous review culture, and a patch queue that never seems to shrink. Many aspiring contributors ask the same questions: Where do I begin? What should I work on?

This talk offers a practical roadmap for entering PostgreSQL development. Rather than starting with large features or ambitious rewrites, we focus on a disciplined approach: reviewing patches, fixing small bugs, testing edge cases, and building intuition for the codebase.

We explore how small improvements—clarifying a review comment, or isolating a bug—compound into deeper understanding and meaningful contributions. We will also discuss the psychological side of hacking: navigating imposter syndrome, learning from reviews, and turning feedback into momentum.

PostgreSQL is not conquered in a single patch. It is learned incrementally. This talk demonstrates how sustained, focused effort transforms confusion into contribution.

What Attendees Will Learn
• How to choose a first patch
• How patch review builds architectural understanding
• How small changes lead to larger infrastructure work
• How to navigate PostgreSQL’s review culture effectively
• How to turn feedback into growth instead of frustration
Speaker: Xuneng Zhou
From trust to Tokens: A Short History of PostgreSQL Authentication

PostgreSQL offers a surprisingly large number of authentication methods—but most users only encounter one or two of them, often without understanding why they exist.

In this short talk, we take a fast, story-driven tour through the evolution of PostgreSQL authentication. Starting with early Unix-centric assumptions (trust, ident, peer), we move through password authentication, enterprise integrations like LDAP and Kerberos, and end with modern identity-driven approaches such as certificate- and token-based authentication.

Rather than listing every option, this talk focuses on key inflection points: what problem PostgreSQL was solving at each stage, what trade-offs were made, and how those decisions still affect real-world deployments today.

Attendees will leave with a clear mental model of PostgreSQL authentication—enough to choose wisely, avoid common mistakes, and understand where the ecosystem is heading.
Speaker: Murat Tuncer
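As a taste of that evolution, several generations of these methods can sit side by side in a single pg_hba.conf. The sketch below is illustrative only (hostnames and networks are invented), not a recommended production configuration:

```
# TYPE    DATABASE  USER  ADDRESS         METHOD
local     all       all                   peer           # Unix-socket login mapped to the OS user
host      all       all   10.0.0.0/8      scram-sha-256  # SCRAM password auth (PostgreSQL 10+)
hostssl   all       all   0.0.0.0/0       cert           # TLS client-certificate authentication
host      all       all   192.168.0.0/24  ldap ldapserver=ldap.example.com ldapbasedn="dc=example,dc=com"
```

Rules are matched top to bottom, so the order of lines is itself a design decision—one of the trade-offs the talk touches on.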
What's new with constraints in Postgres 18

PostgreSQL 18 introduces temporal keys and NOT ENFORCED constraints, and promotes NOT NULL to a first-class constraint. Constraint handling for partitioned tables has also improved. In this session, we’ll walk through these changes with practical examples and finish with a sneak peek at what’s coming in PostgreSQL 19.

Speaker: Gülçin Yıldırım Jelínek
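To make the abstract concrete, here is a hedged sketch of what these PostgreSQL 18 features look like in SQL (table and constraint names are invented for illustration):

```sql
-- Temporal primary key (requires the btree_gist extension):
-- no two rows for the same room may have overlapping stay ranges
CREATE EXTENSION IF NOT EXISTS btree_gist;
CREATE TABLE booking (
    room_id int,
    stay    daterange,
    PRIMARY KEY (room_id, stay WITHOUT OVERLAPS)
);

-- NOT ENFORCED: record a rule in the catalog without paying its validation cost
ALTER TABLE booking
    ADD CONSTRAINT room_id_positive CHECK (room_id > 0) NOT ENFORCED;

-- NOT NULL as a first-class, named constraint
ALTER TABLE booking
    ADD CONSTRAINT room_id_not_null NOT NULL room_id;
```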
PostgreSQL queues done right with PgQ

Modern applications often rely on message queues for background jobs, data pipelines, notifications, and event-driven architectures. Using an external system like Kafka, Redis, or RabbitMQ increases operational complexity and introduces new failure modes. All of this can be avoided by keeping the message queue in the database.

A quick search on the internet shows that developers commonly try to engineer a database queue on top of SELECT … FOR UPDATE SKIP LOCKED (available since PostgreSQL 9.5). This approach works reasonably well under small load, and falls apart spectacularly if subscribers can’t keep up with the publishing rate. PostgreSQL can do better, and in fact it already did: PgQ is a PostgreSQL extension that provides a generic, high-performance lockless queue with simple SQL.

In this talk, we start with why common SELECT … FOR UPDATE SKIP LOCKED approaches fall apart under load, and how PgQ quietly solved those problems a couple of decades ago. Then we take a deep look at PgQ internals: snapshot-based event reads, transaction-ordered delivery, and how PgQ gets away with just a single index to achieve high throughput and consistency. Finally, we will discuss practical patterns for running PgQ on managed PostgreSQL services where this extension is typically not available.
Speaker: Alexander Kukushkin
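For reference, the naive pattern the talk starts from usually looks something like the following (an illustrative sketch; the table and column names are invented):

```sql
-- A minimal job table
CREATE TABLE jobs (
    id      bigserial PRIMARY KEY,
    payload jsonb NOT NULL,
    done    boolean NOT NULL DEFAULT false
);

-- Each worker claims one unprocessed job, skipping rows
-- that other workers have already locked
BEGIN;
SELECT id, payload
FROM jobs
WHERE NOT done
ORDER BY id
LIMIT 1
FOR UPDATE SKIP LOCKED;

-- ... process the claimed job, then mark it done ($1 = the claimed id):
UPDATE jobs SET done = true WHERE id = $1;
COMMIT;
```

The failure mode the abstract alludes to: once a backlog builds up, every claim scans past an ever-growing pile of locked and dead rows, so throughput degrades exactly when load is highest.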
Building safety tooling for risk-free AI tuning of Postgres: Fast cars need fast brakes

Optimizing your database with AI is a tantalizing prospect, but how can we make sure to do it in a risk-free manner? In this talk, I will share my experience with building safeguards and guardrails for automated PostgreSQL tuning to help you sleep well at night — the better the safety net, the more freely we can let the agent work to improve the system.

I will walk through the tried and tested safety patterns: memory and performance monitoring, and validation techniques that ensure every change is safe. The goal is simple — get the performance gains you want while minimizing risk to your system. Whether you are considering automated tuning or building your own tools, building for safety should always be the highest priority.
Speaker: Mohsin Ejaz
Logical Decoding Protocol V2: Streaming Transactions, Schema Changes, Backfills and more

My talk will examine the practical details of implementing a logical decoding consumer, drawing from my experience building a client in Go using pglogrepl and from studying production CDC systems like PeerDB and Dolt. Since there are many talks, blog posts, and docs that provide overviews, I want to specifically cover the non-obvious and undocumented tidbits on logical decoding that I encountered. Specifically:

- Protocol V2 streaming semantics: handling interleaved transaction chunks, the StreamStart/StreamStop/StreamCommit message sequence, and why duplicate StreamAbort messages occur
- TOAST column handling: the 'u' (unchanged) tuple type, backfill strategies, and the REPLICA IDENTITY tradeoffs different systems make
- DDL & Schema change detection: reactive vs proactive, using RelationMessage deltas rather than DDL parsing, and how PeerDB propagates column additions to destinations
- Initial data sync: leveraging the exported snapshot from slot creation for consistent backfills without race conditions
- And more...

The talk assumes some familiarity with PG replication concepts and is aimed at developers building or maintaining CDC pipelines.

I have been writing about PostgreSQL on my blog for several years; this will be my first conference talk.
Speaker: Brandon Mochama
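For orientation, a consumer typically creates a slot and then requests protocol version 2 streaming via the pgoutput options on a replication connection. The slot and publication names below are invented for illustration:

```sql
-- Create the slot and export a snapshot for a consistent initial backfill
CREATE_REPLICATION_SLOT my_slot LOGICAL pgoutput (SNAPSHOT 'export');

-- Begin streaming with V2 semantics; 0/0 resumes from the slot's confirmed position
START_REPLICATION SLOT my_slot LOGICAL 0/0 (
    proto_version '2',          -- enables Stream* messages for in-progress transactions
    streaming 'on',
    publication_names 'my_pub'
);
```

The exported snapshot is what makes the "initial data sync without race conditions" bullet work: backfill queries run against that snapshot, and the slot replays everything that happened after it.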
PostgreSQL Generated Columns by Example

PostgreSQL generated columns are a powerful feature, and recent releases have significantly expanded what they can do. With PostgreSQL 18, generated columns are now virtual by default, while still allowing stored behavior, introducing new trade-offs around performance, storage, and query behavior.

In this talk, we explore PostgreSQL generated columns *by example*, using concrete, practical scenarios to understand how they behave and when they are a good fit. We will look at how generated columns evolved across PostgreSQL versions, what problems they solve well, and where careful design is still required.

To ground these examples in real-world usage, the talk uses Django as a concrete case study of how PostgreSQL generated columns are exposed and used through a widely adopted Python framework. This helps show how database features are actually used in production, and how ORM abstractions influence their adoption.

The talk also reflects recent improvements in Django 6.0 that better align with PostgreSQL’s generated column behavior. Attendees will leave with a clear mental model, practical examples, and guidance on when to use virtual versus stored generated columns in real systems.
Speaker: Paolo Melchiorre
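A hedged sketch of the virtual-versus-stored trade-off the talk explores (table, column names, and the tax rate are invented for illustration):

```sql
CREATE TABLE product (
    name      text NOT NULL,
    price_net numeric NOT NULL,
    -- VIRTUAL (the PostgreSQL 18 default): evaluated whenever the row is read,
    -- taking no storage
    price_gross numeric GENERATED ALWAYS AS (price_net * 1.22) VIRTUAL,
    -- STORED: computed on write, occupies disk, and can be indexed
    price_rounded numeric GENERATED ALWAYS AS (round(price_net)) STORED
);

INSERT INTO product (name, price_net) VALUES ('widget', 100.00);
SELECT price_gross FROM product;  -- computed at read time, never written to disk
```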


By registering for this event you agree to abide by the Microsoft Reactor Code of Conduct.