Hands-on workshop

Virtual

Sandboxing AI agents: Container isolation strategies for safe execution

In this hands-on workshop we examine the critical and often overlooked challenge of AI agent sandboxing. We start with a real-world example of running an AI agent without isolation, then move the agent into a container and determine whether the container acts as a security boundary.

Jun 23, 2026

11:00

CEST


AI agents are used for many tasks today, from writing code and reviewing pull requests to interacting with APIs, building frameworks, and drafting blog posts and workshop abstracts. But what happens if an AI agent is compromised through prompt injection, or simply misbehaves, misunderstanding an instruction or deleting the wrong file? This hands-on workshop examines the critical and often overlooked challenge of AI agent sandboxing. It explores a real-world example of running an AI agent without isolation, moves it into a container, and determines whether that container acts as a security boundary. It then looks under the hood at container runtimes, including runc and Styrojail.

By the end of this workshop, attendees will be able to:
• Articulate the security risks of running AI agents without sandboxing.
• Explain why containers alone are not a security boundary.
• Determine whether a container environment is isolated.
• Distinguish between container runtimes and understand their security properties.
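To give a flavor of the third outcome above, here is a minimal sketch of how a process might check whether it is running inside a container. This is an illustrative example, not workshop material: the file paths and marker strings are well-known Linux container indicators, but they are heuristics, and the workshop's point is precisely that such signals (and containers themselves) are not a security guarantee.

```python
from pathlib import Path

# Common (heuristic) container indicators found in /proc/1/cgroup.
# None of these proves isolation; a real assessment goes deeper
# (namespaces, capabilities, seccomp profile, runtime in use).
CGROUP_MARKERS = ("docker", "containerd", "kubepods", "lxc")

def looks_containerized(cgroup_text: str, dockerenv_exists: bool) -> bool:
    """Return True if the cgroup info or the /.dockerenv marker file
    suggests the current process is running inside a container."""
    return dockerenv_exists or any(m in cgroup_text for m in CGROUP_MARKERS)

def check_current_environment() -> bool:
    """Apply the heuristic to the current process (Linux only)."""
    try:
        cgroup_text = Path("/proc/1/cgroup").read_text()
    except OSError:
        cgroup_text = ""  # /proc unavailable (e.g. non-Linux host)
    return looks_containerized(cgroup_text, Path("/.dockerenv").exists())
```

A passing check here only means the process *appears* containerized; whether that container is an actual isolation boundary is the question the workshop digs into.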


Register for PlatformCon 2026