Talk

Virtual

Securing AI agents in production: A zero-trust approach to LLM integration

This talk explores how AI agents can securely operate in production environments, enabling LLM-driven workflows inside internal systems without compromising zero-trust principles.


Running AI agents in production introduces new security challenges, especially when they interact with internal systems and sensitive data. In this talk, Karan shares how a fintech platform deployed Google ADK agents on private GKE clusters while preserving strict zero-trust boundaries.

The session walks through real-world architecture decisions, including:
• Private Service Connect for controlled access to Vertex AI Agent Engine
• Workload Identity to remove long-lived service account keys
• Network policies and firewall rules to isolate agent workloads
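To make the isolation point concrete: a typical approach is a default-deny egress NetworkPolicy on the agent pods that allows only DNS and the Private Service Connect endpoint. The sketch below expresses such a manifest as a Python dict; the namespace, labels, and PSC endpoint CIDR are illustrative assumptions, not details from the talk.

```python
# Minimal sketch (assumed values, not from the talk): a Kubernetes
# NetworkPolicy restricting egress from agent pods to DNS and a
# Private Service Connect endpoint fronting Vertex AI Agent Engine.
agent_egress_policy = {
    "apiVersion": "networking.k8s.io/v1",
    "kind": "NetworkPolicy",
    "metadata": {"name": "agent-egress-allowlist", "namespace": "agents"},
    "spec": {
        # Applies only to pods labeled as ADK agent workloads (label is illustrative)
        "podSelector": {"matchLabels": {"app": "adk-agent"}},
        "policyTypes": ["Egress"],
        "egress": [
            # Allow in-cluster DNS resolution
            {"ports": [{"protocol": "UDP", "port": 53}]},
            # Allow HTTPS to the PSC endpoint only (CIDR is a placeholder)
            {
                "to": [{"ipBlock": {"cidr": "10.10.0.5/32"}}],
                "ports": [{"protocol": "TCP", "port": 443}],
            },
        ],
    },
}
```

Because `policyTypes` includes `Egress`, any traffic not matched by an `egress` rule is dropped, which is what keeps a compromised agent from reaching arbitrary internal services.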

Attendees will leave with practical design patterns and trade-offs for securely integrating LLM-powered agents into production platforms.


Register for PlatformCon 2026