Talk

Virtual

Enterprise AI guardrails: Preventing silent interpretation errors at scale

This talk shows how AI-assisted SDLC tools silently make interpretive decisions that lead to biased or incorrect outcomes, and how platform-level guardrails can prevent entire classes of failure across the enterprise.

Maebh Booth traces bias she discovered while building an AI-assisted genealogy project, revealing a pattern: AI systems make silent interpretive decisions about scope, relevance, defaults, and ambiguity before generating code or analysis. When those decisions are wrong, the outputs can look correct yet embed bias or functional errors that no one thought to check for.

Viewed through an enterprise lens, this is a platform problem: without shared guardrails in AI-assisted SDLC tools, the same hidden assumptions repeat across teams, driving rework and production risk at scale.

Maebh presents a guardrail workflow that forces early course correction. Attendees will learn:

• Rule language and triggers that map to these interpretive operations (scope, relevance, defaults, ambiguity)
• A phase-based "pause and clarify" workflow and common rollout pitfalls
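The "pause and clarify" idea can be illustrated with a minimal sketch. Everything below is hypothetical and not from the talk: the trigger phrases, the category names, and the `check_request` function are illustrative stand-ins for whatever rule language a real guardrail would use. The point is the shape of the workflow: scan a request for wording that usually hides an unstated assumption, and surface clarifying questions instead of proceeding.

```python
# Hypothetical "pause and clarify" guardrail sketch (names and trigger
# phrases are illustrative, not from the talk). Before an AI tool acts on
# a request, scan for wording that signals a silent interpretive decision
# and, if any trigger fires, return clarifying questions instead of
# proceeding.

# Trigger phrases grouped by the interpretive decision they tend to hide.
TRIGGERS = {
    "scope": ["all users", "everything", "the whole dataset"],
    "defaults": ["standard", "typical", "normal"],
    "ambiguity": ["etc", "and so on", "similar"],
}

def check_request(request: str) -> list[str]:
    """Return one clarifying question per triggered category.

    An empty list means no trigger fired and the tool may proceed.
    Matching is crude substring search; a real rule engine would use
    word boundaries or structured rules.
    """
    lowered = request.lower()
    questions = []
    for category, phrases in TRIGGERS.items():
        if any(phrase in lowered for phrase in phrases):
            questions.append(
                f"Clarify {category}: the request contains wording that "
                f"usually hides an unstated assumption."
            )
    return questions

# Usage: a request that would otherwise be interpreted silently.
questions = check_request("Backfill the standard fields for all users, etc.")
for q in questions:
    print(q)
```

The design choice to return questions rather than raise an error is deliberate: the workflow pauses the phase and hands ambiguity back to the human, rather than letting the tool pick a default.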

Register for PlatformCon 2026