Talk
Virtual
Building a hybrid AI platform with Amazon EKS for platform engineering teams
Learn how platform teams build a hybrid AI platform on Amazon EKS using Hybrid Nodes, Terraform, and NVIDIA GPUs, providing self-service, opinionated defaults, and reliable GPU access for ML teams across cloud and on-prem.
Platform teams are increasingly asked to support AI workloads that do not fit neatly into a single environment. GPU availability, data locality, and cost often push teams toward hybrid architectures that span cloud and on-premises infrastructure.
In this session, the speakers explore how platform engineering teams use Amazon EKS Hybrid Nodes and EKS Anywhere to build a consistent AI platform across environments. They show how infrastructure is provisioned with Terraform, NVIDIA GPUs are integrated, and hybrid complexity is abstracted behind a self-service interface for application and ML teams.
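As a rough illustration of the provisioning step described above, a Hybrid Nodes-enabled EKS cluster can be sketched in Terraform. This is a minimal, hypothetical fragment, not the speakers' actual setup: the cluster name, CIDR ranges, and referenced IAM/subnet resources are placeholders, and it assumes the `remote_network_config` block on the `aws_eks_cluster` resource available in recent versions of the Terraform AWS provider.

```hcl
# Illustrative sketch only -- all names, CIDRs, and referenced
# resources (aws_iam_role.cluster, aws_subnet.private) are placeholders.
resource "aws_eks_cluster" "hybrid" {
  name     = "ai-platform"
  role_arn = aws_iam_role.cluster.arn

  vpc_config {
    subnet_ids = aws_subnet.private[*].id
  }

  # Hybrid Nodes: declare the on-premises CIDR ranges that the
  # remote nodes and their pods will use, so the EKS control plane
  # can route to workloads running outside the VPC.
  remote_network_config {
    remote_node_networks {
      cidrs = ["10.100.0.0/16"]
    }
    remote_pod_networks {
      cidrs = ["10.101.0.0/16"]
    }
  }
}
```

In a pattern like this, the same Terraform module can stand behind the self-service interface the session describes, so application and ML teams request clusters without handling the hybrid networking details themselves.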
The talk focuses on real platform concerns: cluster lifecycle, GPU scheduling, networking, security boundaries, and clear ownership between platform and application teams. Attendees will leave with practical patterns, trade-offs, and lessons learned from treating the platform as a product, not just infrastructure.
