Talk
Virtual
Run it before you read it
How should teams review AI-generated code? They shouldn't. Not first, anyway. We've adopted a method of making AI prove its code works in production-like environments before humans review it. We'll show you why, and how to implement the same approach.
AI agents are writing code faster than any team can review it. The instinct is to review faster. In this talk, Natan Yellin argues that this solves the wrong problem.
The real shift is making AI prove its code works before a human ever looks at it. Natan shares how his team rethought CI/CD so that AI-generated code runs against isolated sandbox environments before review. When a reviewer finally opens the PR, they do not just see a diff. They see screenshots, execution results, system behavior, and real output. Everything they need to validate that the code works as expected.
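The gate described above can be sketched in a few lines. This is a minimal illustration of the idea, not Yellin's actual pipeline: a real setup would execute inside an isolated sandbox environment (a container or VM with production-like data), and the function name `run_before_review` is invented here. The core move is the same, though: execute the AI-generated code first, and attach the captured results to the PR so the reviewer sees real output next to the diff.

```python
import json
import subprocess
import sys
import tempfile

def run_before_review(command: list[str], timeout: int = 120) -> dict:
    """Run a command in a throwaway working directory and capture
    everything a reviewer would want attached to the PR.

    Sketch only: a production version would run in a fully isolated
    sandbox (container/VM), not just a temporary directory.
    """
    with tempfile.TemporaryDirectory() as workdir:
        proc = subprocess.run(
            command,
            cwd=workdir,          # throwaway directory, nothing leaks out
            capture_output=True,  # collect stdout/stderr for the report
            text=True,
            timeout=timeout,
        )
    # This report is what gets posted on the PR alongside the diff.
    return {
        "command": command,
        "exit_code": proc.returncode,
        "stdout": proc.stdout,
        "stderr": proc.stderr,
        "passed": proc.returncode == 0,
    }

if __name__ == "__main__":
    # Example: prove a trivial script actually runs before anyone reviews it.
    report = run_before_review([sys.executable, "-c", "print('2+2 =', 2+2)"])
    print(json.dumps(report, indent=2))
```

The reviewer then opens a PR that already answers "does it run?", and review time goes to "is this the right change?" instead.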
In this talk, attendees will learn how to apply similar approaches in their own CI/CD pipelines, along with surprising gotchas: why mock data is often insufficient, and how to build proper isolated environments for testing AI-generated code.
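The mock-data gotcha is easy to demonstrate. The sketch below is an invented example (the schema, rows, and helper name `seed_sandbox_db` are not from the talk): it seeds a throwaway SQLite database with production-shaped edge cases instead of uniformly clean mock rows. AI-generated code that passes against tidy mocks routinely breaks on exactly this kind of data.

```python
import sqlite3

def seed_sandbox_db(path: str) -> None:
    """Seed a throwaway SQLite database with production-shaped rows.

    Mock data is usually too clean: every row uniform, no NULLs, no
    unicode, no oversized values. Real data has all of these.
    (Schema and rows are invented for illustration.)
    """
    conn = sqlite3.connect(path)
    conn.execute(
        "CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT, email TEXT)"
    )
    rows = [
        (1, "Alice", "alice@example.com"),      # the happy path mocks cover
        (2, None, "no-name@example.com"),       # NULL where code expects text
        (3, "Żółć 🦊", "unicode@example.com"),  # non-ASCII name
        (4, "x" * 10_000, None),                # oversized field, missing email
    ]
    conn.executemany("INSERT INTO users VALUES (?, ?, ?)", rows)
    conn.commit()
    conn.close()
```

Code that calls, say, `name.upper()` on every row works fine against mocks and crashes on row 2; a sandbox seeded like this surfaces the failure before a human ever opens the PR.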