What If We’ve Been Scaling Stream Processing Wrong All Along
Session Abstract
We’ve normalized extraordinary inefficiency in stream processing. Workloads of a few thousand events per second don’t justify repartition storms, serialization overhead, or state migration. This talk explores a different path: keep the Kafka Streams DSL, adopt Flink-like exactly-once semantics, lean on Project Loom for concurrency, and challenge the assumption that stream processing must be distributed.
Session Description
Your Kafka Streams application just rebalanced. Again. Your Flink checkpoint is timing out. Again.
Here’s an uncomfortable truth: most stream processing applications don’t operate at Uber scale. They handle thousands of events per second, with complex joins and stateful aggregations: valid use cases, but nowhere near the volumes that justify the operational complexity we’ve accepted as normal.
Yet we pay the full distributed systems tax anyway. Repartition topics double network I/O and storage. Repeated serialization burns CPU cycles, often a significant share of an application’s total compute. Standby replicas sit idle. State migrates or is restored on every deployment. And then there’s the human cost: specialized expertise that takes years to develop, and expert teams that are expensive to build and painful to lose.
We’ve normalized extraordinary inefficiency in the name of horizontal scalability that many applications will never need.
But rethinking stream processing in 2026 doesn’t mean “just use Postgres.”
In this talk, I’ll share an early-stage exploration of a different approach: a framework that preserves the Kafka Streams DSL, borrows Flink’s approach to exactly-once semantics, leverages Project Loom for high concurrency, and challenges a fundamental assumption that both frameworks share.
This is an invitation to question conventional wisdom and explore what stream processing could look like when we stop distributing by default.