Talk

Virtual

FinOps for AI: Why GPU cost control breaks at scale

As AI adoption grows, many teams discover that traditional FinOps practices fail to control GPU spending. This session explores why AI workloads break conventional cost models and what platform teams can do about it.


Javier Abrego explores why conventional FinOps approaches struggle when applied to AI infrastructure and GPU-heavy workloads.

As organizations scale inference and training, cost visibility alone often fails to prevent runaway spending due to bursty concurrency patterns, static GPU reservations, and fragmented ownership models.

Attendees will learn:
• Why traditional FinOps models break with AI workloads
• Hidden cost drivers in multi-tenant GPU environments
• The gap between allocation and real utilization
• Platform-level strategies to regain economic control

This session is aimed at platform and infrastructure teams supporting AI at scale.


Register for PlatformCon 2026