Talk
Virtual
How Cilium became the de facto CNI for cloud-native workloads
Cilium has gone from an ambitious idea to the default networking and security layer in many Kubernetes platforms. This talk traces the technical bets behind that rise, why they mattered to platform teams, and how the project earned trust at scale.
Kubernetes networking started as plumbing: get pods talking and move on. Then reality hit: shared clusters, compliance requirements, multi-cluster sprawl, noisy neighbors, and the constant need to understand what traffic is doing when things go wrong. Platform teams needed networking that was fast, observable, and consistent across environments, without stitching together a long chain of separate tools.
This session tells the story of how Cilium became the de facto CNI for cloud-native workloads by focusing on a few hard problems and solving them in a way platform engineers could operationalize. It covers the shift from simple IP-based rules to identity-aware policy, why deep visibility became a first-class requirement, and how an eBPF-based datapath changed what was achievable in both performance and control. It concludes with what this enables next: more consistent security controls, better multi-cluster operations, and a cleaner path for bringing virtual machines and modern workloads under the same platform model.
