Talk
Virtual
Autonomous but accountable: Securing AI agents with the human in the circuit
As AI moves to autonomous agents, the risk surface explodes. Learn a pragmatic Security-as-Code framework to safely govern AI platforms while keeping humans in the loop for high-risk actions.
The transition from simple prompts to autonomous agents that can initiate transactions represents a significant leap in both productivity and enterprise risk. In a regulated ecosystem, an AI agent’s capability list is a direct map of organizational vulnerability. To scale safely, organizations must move beyond manual gatekeeping to an architecture where security is automated but humans remain "in the circuit."
Gaurav shares a pragmatic "Security as Code" framework designed to safely enable AI platforms without stifling innovation. He explains how AI can assist the review process by providing automated context for manual overrides, ensuring that as agents become more reliable, human oversight stays focused on the most critical, high-stakes decisions.
Key takeaways:
• Translate AI agent capabilities and tool access into actionable risk language.
• Use AI to provide context for manual reviews, making them faster and more precise.
• Balance agent autonomy with human-in-the-loop oversight for critical actions.
• Automate security gates within CI/CD pipelines to manage agent entitlements.
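The human-in-the-circuit pattern behind these takeaways can be sketched as a simple policy gate: low-risk agent actions proceed automatically, while high-risk tool calls block until a human approves. This is a minimal illustration only; the tool names, risk tiers, and function names (`HIGH_RISK_TOOLS`, `assess_risk`, `gate`) are assumptions for the sketch, not the speaker's implementation.

```python
# Illustrative human-in-the-loop gate for autonomous agent actions.
# Tool names and thresholds here are hypothetical examples.

HIGH_RISK_TOOLS = {"initiate_payment", "delete_records", "modify_entitlements"}

def assess_risk(tool_name: str, amount: float = 0.0) -> str:
    """Map an agent's requested tool call to a coarse risk tier."""
    if tool_name in HIGH_RISK_TOOLS or amount > 10_000:
        return "high"
    return "low"

def gate(tool_name: str, amount: float = 0.0, approved_by_human: bool = False) -> bool:
    """Allow low-risk actions automatically; require human sign-off for high-risk ones."""
    if assess_risk(tool_name, amount) == "high":
        return approved_by_human
    return True

# Low-risk read proceeds; a payment is blocked until a human approves it.
print(gate("read_report"))                              # low risk: auto-approved
print(gate("initiate_payment"))                         # high risk: blocked
print(gate("initiate_payment", approved_by_human=True)) # high risk: human approved
```

The same predicate can run as a step in a CI/CD pipeline, failing the build when an agent's requested entitlements include unreviewed high-risk capabilities.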
