Talk
Virtual
AI agents are just microservices with worse tooling
Your AI agents need the same patterns as distributed services: structured contracts and fault isolation. Agent tooling is still in its infancy. Here's how we shipped production agents by treating them like microservices, not magic.
In this talk, Tyler Jang shares what Trunk learned while shipping an AI agent that fixes flaky tests. The thesis is that agents are not a new paradigm; they are distributed services, and platform engineers already know how to build those.
Key takeaways:
• Multi-agents beat mega-agents: smaller black boxes mean a smaller blast radius when models drift, with easier mocking and targeted evals.
• LLM-specific observability comes from classic instrumentation: a daily trace-review habit makes or breaks a team.
• Traditional SWE still wins: retries, structured contracts, and embedding clustering instead of prompts. The LLM should not do everything.
• What not to do: homebrewed audit logs, LLM-generated prompts, and letting the model run its iteration loop unchecked.
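The structured-contracts-plus-retries pattern from the takeaways can be sketched in a few lines. This is a minimal illustration, not Trunk's actual implementation: the `FixProposal` schema, field names, and the lambda standing in for an LLM call are all hypothetical. The point is that the contract is enforced in plain code, and the retry loop is bounded by the caller rather than left to the model.

```python
import json
from dataclasses import dataclass

@dataclass
class FixProposal:
    """Hypothetical contract for an agent that proposes a flaky-test fix."""
    test_name: str
    patch: str
    confidence: float

def parse_proposal(raw: str) -> FixProposal:
    """Validate the model's raw output against the contract; raise on any violation."""
    data = json.loads(raw)
    proposal = FixProposal(**data)  # TypeError on missing/extra fields
    if not 0.0 <= proposal.confidence <= 1.0:
        raise ValueError("confidence out of range")
    return proposal

def call_with_retries(model, max_attempts: int = 3) -> FixProposal:
    """Bounded retry loop: re-ask the model only when the contract is violated."""
    last_error = None
    for _ in range(max_attempts):
        try:
            return parse_proposal(model())
        except (json.JSONDecodeError, TypeError, ValueError) as exc:
            last_error = exc  # in a real system: log the trace, then re-prompt
    raise RuntimeError(f"contract never satisfied: {last_error}")

# Stand-in for an LLM call: the first reply is malformed, the second conforms.
replies = iter(['not json', '{"test_name": "test_login", "patch": "...", "confidence": 0.9}'])
result = call_with_retries(lambda: next(replies))
```

Because the contract lives in ordinary code, the same validator works in unit tests with a mocked model, which is exactly what makes small agents easier to mock and eval than one mega-agent.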
Attendees leave with patterns that work today, not hype.