Talk

Virtual

Hard-won lessons after processing 6.7T events through PostgreSQL queues

Many organizations reach for specialized streaming systems like Apache Kafka for high-throughput event processing, but is it always the best choice? This talk chronicles six years of lessons learned while scaling PostgreSQL from a simple queue to a system processing 100,000 events per second and delivering 6.7T total events. It covers the configuration values, query patterns, and architectural decisions that enabled PostgreSQL to compete with, and often outperform, dedicated messaging systems while providing operational simplicity and transactional guarantees. This is production-tested guidance from a system that processes billions of events monthly.
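The abstract does not spell out the query patterns it refers to, but the canonical PostgreSQL queue-dequeue pattern is `FOR UPDATE SKIP LOCKED` (available since PostgreSQL 9.5), which lets many concurrent consumers poll the same table without blocking each other. The sketch below is illustrative only; the table and column names are hypothetical and not taken from the talk.

```sql
-- Hypothetical queue table; names are illustrative, not from the talk.
CREATE TABLE events (
  id         bigserial    PRIMARY KEY,
  payload    jsonb        NOT NULL,
  status     text         NOT NULL DEFAULT 'pending',
  created_at timestamptz  NOT NULL DEFAULT now()
);

-- Core dequeue pattern: claim a batch of pending rows without blocking
-- other workers. FOR UPDATE SKIP LOCKED makes concurrent consumers skip
-- rows that another transaction has already locked, so throughput scales
-- with the number of workers instead of serializing on row locks.
WITH claimed AS (
  SELECT id
  FROM events
  WHERE status = 'pending'
  ORDER BY id
  LIMIT 100
  FOR UPDATE SKIP LOCKED
)
UPDATE events e
SET status = 'done'
FROM claimed c
WHERE e.id = c.id
RETURNING e.id, e.payload;
```

Because the claim and the status update happen in one transaction, a crashed worker's rows are simply unlocked on rollback and picked up by the next consumer, which is one source of the transactional guarantees the abstract mentions.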

Register for PlatformCon 2026