
Tags: GDPR · EU Data
Blog / AI · 2026-03-17 · 9 min read

Naomie Halioua
Co-founder & CRO, AI Research

TRISM: Why Agentic AI Doesn't Have a Trust Problem — It Has an Architecture Problem

When a single AI makes a mistake, it's containable. When a network of AI agents makes a mistake — each one passing instructions to the next — the damage compounds before anyone notices. A landmark paper proposes the first framework to solve this at the architecture level.

TRISM: Trust, Risk and Security Management in Agentic AI Systems

Raza et al. · AI Open (Elsevier), 2026

Read the paper → · 66 citations in 3 months

The multi-agent trust gap

Most conversations about AI trust focus on a single model: is GPT-4 reliable? Does Claude hallucinate? But the frontier of AI deployment has moved far beyond single models. Enterprises are building multi-agent systems — pipelines where a dozen specialized AI agents collaborate, each handling one step of a complex workflow.

In compliance, this architecture is particularly powerful — and particularly dangerous. One agent identifies applicable regulations. Another maps obligations. A third scores risk. A fourth generates reports. When it works, you get consistent, explainable regulatory analysis in minutes. When it fails, each agent amplifies the previous one's errors, and by the time a human reviews the output, the mistake is buried six layers deep.

What TRISM proposes

The TRISM paper (Trust, Risk and Security Management) is the first serious attempt to build a unified governance framework for multi-agent AI systems. Published in AI Open (Elsevier) and already cited 66 times in under three months, it has clearly resonated with the research community.

The core insight is deceptively simple: most AI governance frameworks fail because they treat trust, risk, and security as a single, undifferentiated problem. TRISM separates them into three distinct architectural layers, each requiring its own mechanisms.

Trust — Will this agent do what I expect?

Trust is about behavioral predictability. Can you verify that an agent's outputs match its stated purpose? TRISM proposes formal verification mechanisms, behavioral contracts between agents, and continuous trust scoring — each agent earns or loses trust based on output quality, not just initial calibration.
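The paper describes continuous trust scoring only at the framework level; as a minimal sketch of the idea, an agent's score could be an exponential moving average over reviewed output quality, so recent behavior outweighs initial calibration. All names and parameters here are illustrative, not from the TRISM paper.

```python
from dataclasses import dataclass

@dataclass
class TrustScore:
    """Continuous trust score for one agent, updated after each output review.

    `alpha` controls how quickly recent output quality outweighs history.
    Illustrative sketch only; TRISM does not prescribe this exact formula.
    """
    value: float = 0.5   # start at neutral trust
    alpha: float = 0.1   # weight given to the most recent review

    def update(self, output_quality: float) -> float:
        """Blend the latest reviewed quality (0.0 to 1.0) into the score."""
        self.value = (1 - self.alpha) * self.value + self.alpha * output_quality
        return self.value

score = TrustScore()
score.update(0.9)  # a good output nudges trust upward
score.update(0.2)  # a bad one pulls it back down
```

The point is that trust becomes a live, queryable property of each agent rather than a one-time certification.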

Risk — What happens if it doesn't?

Risk management in multi-agent systems isn't just about one agent failing — it's about cascade failures. If Agent 3 in a six-agent pipeline produces a flawed risk score, every downstream agent inherits and compounds the error. TRISM introduces inter-agent risk propagation models: quantifying how uncertainty compounds across the pipeline.

Security — Who can interfere from the outside?

Multi-agent systems introduce novel attack surfaces. Prompt injection on one agent can cascade across an entire pipeline. Data poisoning at the source level affects every downstream decision. TRISM defines security perimeters at the agent level, not just at the system boundary — treating each agent as its own security domain.
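One way to picture an agent-level security perimeter is a per-agent allowlist of upstream senders, so a compromised or injected agent cannot freely push instructions to arbitrary nodes. The agent names and the mechanism below are hypothetical, chosen to match the compliance pipeline described earlier; TRISM defines perimeters conceptually, not this specific check.

```python
# Sketch: each agent declares who may send it messages, making every
# agent its own security domain. Names are illustrative.
ALLOWED_UPSTREAM: dict[str, set[str]] = {
    "risk_scorer": {"obligation_mapper"},
    "report_writer": {"risk_scorer"},
}

def accept_message(receiver: str, sender: str, payload: str) -> str:
    """Enforce the perimeter: reject messages from agents that are not
    declared upstream of the receiver."""
    if sender not in ALLOWED_UPSTREAM.get(receiver, set()):
        raise PermissionError(f"{sender} may not send to {receiver}")
    return payload

accept_message("risk_scorer", "obligation_mapper", "mapped obligations")  # accepted
```

In a real deployment this routing check would sit alongside input sanitization and provenance signing, but even the allowlist alone stops a single compromised agent from reaching the whole pipeline.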

Why this matters for compliance teams

The implications of TRISM extend well beyond academic research. Whether you're a compliance officer, a product builder, or operating in a regulated sector, this framework changes what you can demand from your AI systems.


For DPOs and compliance officers

When an AI agent makes a compliance decision in your system, you need to be able to explain it to a regulator. "The AI decided" is not an acceptable answer under the AI Act, GDPR, or any serious regulatory framework. TRISM gives you the vocabulary — and the architecture — to decompose an automated decision into verifiable trust, risk, and security components. Each component becomes auditable independently.


For AI product builders

The AI Act holds you responsible for your system's outputs, even when those outputs come from a chain of agents you don't fully control. If you're building with LangChain, CrewAI, AutoGen, or any multi-agent framework, you need a governance layer that goes beyond prompt engineering. TRISM provides the blueprint: trust contracts between agents, risk propagation tracking, and security perimeters at each node.


For regulated sectors (healthcare, finance, energy, HR)

DORA and NIS2 both require you to demonstrate operational resilience in automated systems. As multi-agent AI moves from labs to production, regulators will ask how you manage trust propagation, error cascading, and adversarial resilience across your AI pipeline. Multi-agent trust frameworks like TRISM are how you prove it — before they ask.

The architecture lesson: trust is a design choice

The deepest insight from the TRISM paper isn't a technique — it's a perspective shift. Trust in AI systems is not something you add after deployment. It's not a monitoring dashboard. It's not an audit you run quarterly. Trust is an architectural property that must be designed into the system from day one.

This means separating trust (behavioral reliability), risk (failure propagation), and security (external interference) into distinct, independently auditable layers. When a regulator asks "why did your system make this decision?", you should be able to answer at each level: which agent made the determination, what was its trust score at the time, how was risk propagated from upstream agents, and what security controls prevented external manipulation.
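The per-decision audit record implied here can be sketched as a simple data structure: one entry per determination, carrying the deciding agent, its trust score at decision time, the risk inherited from upstream, and the security controls that were active. Field names are illustrative, not a description of any vendor's actual schema.

```python
from dataclasses import dataclass, field

@dataclass
class DecisionRecord:
    """One auditable determination in a multi-agent pipeline, answering the
    regulator's question at each TRISM layer. Illustrative sketch only."""
    agent: str                    # which agent made the determination
    determination: str            # what it decided
    trust_score: float            # the agent's trust score at decision time
    upstream_risk: float          # risk propagated from upstream agents
    security_controls: list[str] = field(default_factory=list)

    def explain(self) -> str:
        """Render the layered answer a compliance officer can hand over."""
        controls = ", ".join(self.security_controls) or "none"
        return (f"{self.agent} decided '{self.determination}' "
                f"(trust={self.trust_score:.2f}, "
                f"inherited risk={self.upstream_risk:.2f}, "
                f"controls={controls})")
```

With records like this emitted at every node, "why did your system make this decision?" has a concrete answer at each layer instead of a single opaque output.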

From theory to practice: what this means for Cleo

At Cleo, we built our multi-agent pipeline with exactly this separation in mind. Each regulatory scan triggers 30+ specialized agents, and each agent operates within explicit trust boundaries. Trust, risk, and security are three distinct layers — not an afterthought bolted on top.

This is why our audit decisions are explainable, not just accurate. Every determination includes the regulatory source, the reasoning chain, the confidence score, and the risk propagation path. Compliance officers can validate at any point in the pipeline without having to reconstruct the logic from scratch.

The TRISM framework validates what we've been building: that the future of compliant AI isn't about trusting AI more — it's about building AI systems where trust is verifiable, risk is quantifiable, and security is structural.

Key takeaways

Multi-agent AI systems introduce compounding failure modes that single-model governance can't address.

Trust, risk, and security are three distinct problems — treating them as one is why most AI governance frameworks fail.

The AI Act, DORA, and NIS2 all require explainability and operational resilience — multi-agent trust architectures are how you deliver it.

Trust is an architectural property, not a post-deployment audit. Build it in from day one.

Frequently asked questions

What is the TRISM framework for AI?

TRISM (Trust, Risk and Security Management) is a governance framework for multi-agent AI systems published in AI Open (Elsevier) in 2026. It separates trust (behavioral predictability), risk (failure propagation), and security (external interference) into three distinct architectural layers, each requiring its own governance mechanisms. It was cited 66 times in its first three months.

Why is trust different from security in multi-agent AI?

Trust concerns whether an agent will behave as expected — it's about behavioral reliability and output quality. Security concerns whether external actors can interfere with the system — prompt injection, data poisoning, or unauthorized access. Conflating them leads to governance frameworks that check for attacks but don't verify behavioral consistency, or vice versa.

How does the AI Act apply to multi-agent AI systems?

The AI Act holds deployers responsible for their system's outputs regardless of the internal architecture. In a multi-agent pipeline, this means you're liable even if the error originated in an upstream agent you didn't directly configure. Frameworks like TRISM help by providing auditable trust, risk, and security layers that demonstrate due diligence at each step of the pipeline.

What is risk propagation in multi-agent AI systems?

Risk propagation describes how errors compound across a multi-agent pipeline. If Agent 3 in a six-agent chain produces a flawed output, every downstream agent inherits and amplifies the error. TRISM introduces inter-agent risk propagation models to quantify how uncertainty grows across the pipeline, enabling early detection before cascading failures reach the final output.

Sources & references

  1. Regulation (EU) 2024/1689 — Artificial Intelligence Act
  2. Regulation (EU) 2022/2554 — Digital Operational Resilience Act (DORA)
  3. Directive (EU) 2022/2555 — NIS2 Directive

Related resources

Solutions: AI-Powered Due Diligence
Guides: EU AI Act Compliance Guide

AI · 2026-02-22 — Agentic AI for Regulatory Compliance: Why the Future of Compliance Is Autonomous
AI · 2026-03-10 — Multi-Agent AI for Compliance: What 2026 Research Says
AI · 2026-02-13 — Building Explainable AI for Compliance: Why Transparency Is Non-Negotiable
