Talk
Virtual
Building guardrails for LLMs
A real-world case study on building SentinelGuard, a production-ready LLM security framework with 32 scanners, PII protection, adversarial defense, embedding guardrails, and the engineering trade-offs behind making GenAI systems safe at scale.
CEST
Meet the speakers
LLMs are powerful, but productionizing them is a security and reliability minefield.
Prompt injection. PII leakage. Adversarial manipulation. Biased outputs. Malicious URLs. System prompt exposure. Most teams discover these risks only after something breaks in production.
In this talk, the speakers share the engineering journey behind SentinelGuard, a comprehensive LLM guardrails framework built to address these challenges in real-world systems. They explore how they designed a modular scanning architecture with 32 security scanners, integrated enterprise-grade PII detection using Presidio, implemented adversarial attack detection using statistical and embedding-based methods, and enforced semantic topic guardrails with vector embeddings.
They unpack the real trade-offs engineering leaders face:
• How strict is too strict?
• How do teams prevent over-blocking while maintaining safety?
• How do teams design guardrails that scale across microservices?
• Where should guardrails be placed in distributed architectures?
• How is effectiveness measured?
They also examine architectural patterns, fail-fast versus layered validation strategies, async performance considerations, and API integration approaches.
Attendees will leave with a practical blueprint for implementing LLM security in production.
