From LLM Calls to Governed Agent Systems: Building a Runtime for Auditability, Replay, and Control
Date: May 5 · Time: 17:10 - 17:30 · Location: Scène Junior
As AI systems evolve from isolated LLM calls into autonomous agents, a new class of engineering problems emerges: lack of control, limited observability, and no reliable way to audit or reproduce decisions.
Most current approaches rely on prompts, logs, or ad hoc guardrails, which are insufficient once agents begin to operate across multiple tools, states, and long-running workflows.
In this talk, we introduce a governance-oriented runtime for AI agents. The system provides policy enforcement, trace-based auditability, deterministic replay, and evaluation hooks for non-regression. By treating agent execution as a structured, observable, and controllable process, we enable developers to move from “best-effort prompting” to “engineered agent systems.”
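To make the pattern concrete, here is a minimal sketch of a governed tool call: a policy check before execution, a trace entry after it, and replay that reads from the trace instead of re-invoking the tool. All names (`GovernedRuntime`, `TraceEvent`, the policy map) are illustrative assumptions, not the actual API presented in the talk.

```python
from dataclasses import dataclass, field
from typing import Any, Callable, Dict, List

@dataclass(frozen=True)
class TraceEvent:
    step: int
    tool: str
    args: tuple   # (key, value) pairs, kept hashable for divergence checks
    result: Any

@dataclass
class GovernedRuntime:
    # Policy map: tool name -> predicate over the call arguments.
    # Tools without a policy entry are denied by default.
    policies: Dict[str, Callable[[dict], bool]]
    trace: List[TraceEvent] = field(default_factory=list)

    def call(self, tool: str, fn: Callable[..., Any], **args) -> Any:
        policy = self.policies.get(tool)
        if policy is None or not policy(args):
            raise PermissionError(f"policy denied tool call: {tool}")
        result = fn(**args)  # live execution
        self.trace.append(
            TraceEvent(len(self.trace), tool, tuple(sorted(args.items())), result)
        )
        return result

    def replay(self, step: int, tool: str, **args) -> Any:
        # Deterministic replay: return the recorded result instead of
        # re-invoking the tool; fail loudly if the run has diverged.
        ev = self.trace[step]
        if ev.tool != tool or ev.args != tuple(sorted(args.items())):
            raise RuntimeError(f"replay divergence at step {step}")
        return ev.result
```

Example usage: `rt = GovernedRuntime(policies={"add": lambda a: abs(a["x"]) < 100})`, then `rt.call("add", lambda x, y: x + y, x=2, y=3)` executes live and records the event, while `rt.replay(0, "add", x=2, y=3)` returns the recorded result without touching the tool.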
We will walk through the architecture, core components, and practical design patterns, and show how governance can be integrated into real-world AI workflows without sacrificing flexibility.
Attendees will learn how to design AI systems that are not only powerful, but also auditable, reproducible, and production-ready.
This talk is structured in four parts:
1. Problem framing: why current LLM-based systems fail in production settings (lack of control, explainability, and reproducibility)
2. System design: introducing a governance runtime layer for AI agents
- policy enforcement
- trace as an audit layer
- replay and deterministic execution
- evaluation and non-regression hooks
3. Architecture walkthrough: how the runtime integrates with existing agent/tooling ecosystems
4. Case examples: applying governance to real agent workflows and user-facing decision systems
The session focuses on practical engineering patterns rather than high-level concepts and is intended for developers building or operating AI-driven systems.