Back to Insights
DORAFinancial ServicesICT RiskAI AgentsCISO

DORA for CISOs: ICT Risk Management When AI Agents Run Your Operations

February 19, 2026 · 8 min read

The Digital Operational Resilience Act has been enforceable since January 2025. For over a year now, every bank, insurer, asset manager, and payment provider in the EU has been required to maintain a comprehensive ICT risk management framework — one that BaFin actively supervises. That alone would keep any CISO busy. But here’s the twist: since DORA was drafted, AI agents have moved from research labs into production environments. They’re processing loan applications, running compliance checks, triaging customer service requests, and executing trades. Your ICT risk framework needs to account for autonomous systems that weren’t on anyone’s radar when this regulation was written.

This guide bridges that gap.

DORA
Live since Jan 2025
BaFin
National supervisor
5
Core pillars
72h
Major incident reporting

The Five DORA Pillars — Now With AI Agents in the Mix

DORA rests on five interconnected pillars. Each one takes on new dimensions when AI agents are part of your operations.

1
ICT Risk Management Framework
Identify, protect, detect, respond, recover — across all ICT assets including AI agents and their dependencies.
2
ICT Incident Reporting
Classify and report major ICT-related incidents to BaFin within 72 hours, including AI-driven failures.
3
Digital Operational Resilience Testing
Regular testing including threat-led penetration testing (TLPT) — now covering AI agent attack surfaces.
4
Third-Party Risk Management
Contractual and oversight requirements for all ICT service providers, including AI/ML vendors and model providers.
5
Voluntary sharing of cyber threat intelligence across financial entities to strengthen collective resilience.
Information Sharing

Pillar 1: ICT Risk Management When Agents Act Autonomously

Articles 5–16 of DORA require financial entities to establish a comprehensive ICT risk management framework, approved and overseen by the management body. The framework must cover identification, protection, detection, response, and recovery.

What changes with AI agents:

Traditional ICT risk management assumes deterministic systems. You deploy software, it behaves as coded. AI agents introduce probabilistic behaviour — the same input may produce different outputs. They learn, adapt, and in agentic configurations, they chain decisions together without human intervention.

Your ICT risk framework must now address:

  • Asset inventory expansion. Every AI agent is an ICT asset. Document its purpose, data access, decision authority, model version, and dependencies. An agent that auto-approves insurance claims under €5,000 has a fundamentally different risk profile than one that drafts internal reports.
  • Decision boundary mapping. Define and enforce what each agent can and cannot do. DORA requires proportionate controls — an agent with access to customer PII and transaction authority needs controls commensurate with that access.
  • Model drift and behavioural monitoring. Unlike traditional software, AI agents can degrade silently. Build continuous monitoring for output quality, decision distribution, and anomaly detection. This is your “detect” function adapted for AI.
  • Human-in-the-loop requirements. For critical functions (as defined under DORA Article 3), establish mandatory human review thresholds. Not every decision needs human approval, but your framework must define which ones do.

BaFin has signalled through its 2025 supervisory priorities that it expects institutions to treat AI systems as critical ICT components where they touch core business processes. If your AI agent processes KYC checks or credit scoring, expect scrutiny.

Pillar 2: Incident Reporting — When the Agent Is the Incident

DORA’s incident reporting requirements (Articles 17–23) mandate classification and reporting of major ICT-related incidents. The initial notification must reach BaFin within 72 hours, followed by intermediate and final reports.

AI agents create new incident categories:

  • Autonomous cascading failures. An agent making decisions in a loop can amplify errors faster than any human operator. A miscalibrated trading agent doesn’t just make one bad trade — it makes thousands before anyone notices.
  • Data poisoning and model manipulation. If an adversary compromises the training data or input pipeline of an agent handling AML screening, the resulting failures are ICT incidents under DORA — even if no traditional “system” went down.
  • Hallucination-driven compliance breaches. An AI agent that provides incorrect regulatory information to customers or makes decisions based on fabricated data creates incidents that are both operational and reputational.

Practical steps:

Map your AI agent failure modes to DORA’s incident classification criteria (impact on clients, financial impact, data breach, service disruption). Pre-draft incident report templates that capture AI-specific details: model version, input data characteristics, decision chain reconstruction, and containment actions taken.

Pillar 3: Resilience Testing for AI-Augmented Operations

Articles 24–27 require regular digital operational resilience testing, including advanced threat-led penetration testing (TLPT) for significant institutions.

Your testing programme must now include:

  • Adversarial AI testing. Test agents against prompt injection, data poisoning, model extraction, and evasion attacks. Standard penetration testing won’t cover these vectors.
  • Failure mode testing. What happens when the agent’s API provider goes down? When latency spikes? When input data is corrupted? DORA requires you to know — and to have tested recovery.
  • Scenario-based stress testing. Simulate conditions where multiple AI agents interact unexpectedly, where model outputs degrade gradually, or where an agent operates outside its training distribution.

For institutions subject to TLPT under the TIBER-EU framework (which BaFin implements as TIBER-DE), AI agent infrastructure should be in scope. The threat intelligence phase should specifically assess AI-specific threat actors and techniques.

Pillar 4: Third-Party Risk — Your AI Vendor Is Now a Critical ICT Provider

This is where DORA bites hardest for AI-adopting institutions. Articles 28–44 establish rigorous requirements for managing ICT third-party risk, including a Union-level oversight framework for critical ICT third-party service providers.

The AI vendor landscape creates concentrated risk:

Most financial institutions don’t build foundation models in-house. They rely on a handful of providers — OpenAI, Anthropic, Google, or open-source models hosted by cloud providers. This creates exactly the concentration risk DORA was designed to address.

Your third-party risk management must cover:

  • Contractual requirements (Article 30). Ensure contracts with AI vendors include: data location and processing guarantees, audit rights, incident notification obligations, exit strategies, and subcontracting transparency. Most standard AI API terms of service don’t meet DORA requirements out of the box.
  • Exit strategies. What happens if your AI provider is designated as critical and subjected to Union-level oversight? What if they change pricing, terms, or model behaviour? DORA requires documented, tested exit strategies.
  • Concentration risk assessment. If three of your critical AI agents all depend on the same foundation model provider, you have a single point of failure. Document it, assess it, mitigate it.
  • Sub-outsourcing chains. Your AI vendor likely relies on cloud infrastructure, data providers, and possibly other model providers. Map the full chain. DORA requires it.

NIS2 overlap: If your AI vendor qualifies as a provider of digital infrastructure or ICT services under NIS2 (Directive 2022/2555), they face their own cybersecurity obligations. This doesn’t reduce your DORA responsibilities, but it provides additional assurance — and additional leverage in contract negotiations. Germany’s NIS2 implementation through the NIS2UmsuCG adds national specifics worth tracking.

Pillar 5: Information Sharing on AI-Specific Threats

Article 45 encourages (but doesn’t mandate) information sharing on cyber threats. For AI-specific risks, this pillar becomes particularly valuable.

AI attack techniques evolve rapidly. Prompt injection methods that work today may be patched tomorrow and replaced by new variants. Sharing threat intelligence on AI-specific attacks — adversarial inputs, model vulnerabilities, novel exploitation techniques — benefits the entire sector.

Consider joining or establishing AI-focused threat intelligence sharing groups within existing ISACs (Information Sharing and Analysis Centres) for the financial sector.

Traditional ICT Risk vs. AI-Augmented ICT Risk

Risk Dimension
Traditional ICT
AI-Augmented ICT
System behaviour
Deterministic, predictable
Probabilistic, context-dependent
Failure modes
Binary — works or doesn't
Gradual degradation, silent drift
Attack surface
Network, application, infrastructure
Plus: training data, prompts, model weights, decision chains
Vendor concentration
Distributed across many providers
Few foundation model providers dominate
Change management
Versioned releases, tested deployments
Model updates may alter behaviour without code changes
Incident detection
Logs, alerts, monitoring
Plus: output quality monitoring, decision auditing, drift detection
Auditability
Full code traceability
Black-box decisions require explainability layers

The Implementation Roadmap: Four Phases

Whether you’re retrofitting AI agents into an existing DORA framework or building from scratch, this phased approach keeps things manageable.

Phase 1
Discovery & Gap Analysis (Months 1–2)
Inventory all AI agents and AI-assisted processes. Map each to DORA requirements. Identify gaps in your current ICT risk framework — particularly around asset classification, third-party contracts, and testing coverage. Assess BaFin's current supervisory expectations through published guidance and industry dialogue.
Phase 2
Framework Enhancement (Months 3–5)
Update your ICT risk management framework to explicitly address AI agents. Revise policies for: asset classification (include AI agents as ICT assets), change management (cover model updates and retraining), incident classification (add AI-specific failure scenarios), and third-party risk (renegotiate AI vendor contracts to meet Article 30 requirements).
Phase 3
Testing & Validation (Months 6–8)
Execute resilience testing that covers AI-specific scenarios. Run adversarial testing against AI agents. Conduct tabletop exercises for AI-related incidents. Validate exit strategies for critical AI vendors. Test incident reporting workflows with AI-specific scenarios. Document results for BaFin review.
Phase 4
Continuous Governance (Ongoing)
Embed AI risk management into BAU operations. Establish continuous monitoring for agent behaviour and model drift. Maintain living documentation. Conduct periodic reviews aligned with BaFin's supervisory cycle. Participate in sector-level information sharing on AI threats. Report to the management body quarterly.

BaFin’s Evolving Supervisory Stance

BaFin has not published AI-specific DORA guidance as of early 2026, but the direction is clear. Through its supervisory practice and published communications, BaFin expects:

  • Proportionality with teeth. Smaller institutions using AI agents for non-critical functions face lighter requirements, but “non-critical” is BaFin’s determination, not yours. Document your classification rationale thoroughly.
  • Management body accountability. DORA Article 5(2) makes the management body ultimately responsible for ICT risk. If your board approved AI agent deployment, they own the risk. Ensure they understand what that means.
  • Cross-regulation coherence. BaFin supervises under DORA, MaRisk (updated 2024), BAIT, and increasingly the EU AI Act. Your AI governance framework should address all four without creating contradictory controls.

Where NIS2 Meets DORA

For financial entities, DORA is lex specialis — it takes precedence over NIS2’s general cybersecurity requirements. But the overlap matters in practice:

  • Supply chain security under NIS2 reinforces DORA’s third-party risk requirements. Use NIS2 compliance of your vendors as additional due diligence evidence.
  • Incident reporting timelines differ (NIS2: 24h early warning, DORA: 72h initial notification). If you’re subject to both for different activities, maintain clear reporting procedures for each.
  • Your AI vendors may be directly subject to NIS2 as digital service providers, giving you additional contractual leverage.

Start Now, Not Later

AI agents aren’t a future consideration for DORA compliance — they’re a present reality. Every week you operate AI agents without DORA-aligned controls is a week of unmanaged regulatory risk.

The good news: DORA’s framework is flexible enough to accommodate AI-specific risks. You don’t need a separate AI regulation to get this right. You need a thorough interpretation of existing requirements, applied to new technology.

The practical steps are clear. Inventory your agents. Update your framework. Test rigorously. Manage your vendors. Report incidents properly.

Need help aligning your AI operations with DORA requirements? Our GRC & Compliance practice specialises in helping financial institutions bridge the gap between regulatory frameworks and emerging technology. Get in touch — we’ll start with a gap analysis and build from there.

Need help with this?

We help enterprise security teams implement what you just read — from strategy through AI-powered automation. First strategy session is free.

More Insights