← Back to Insights
Security ArchitectureAI SecuritySABSATOGAFFramework

Security Architecture in the AI Era: From SABSA and TOGAF to Agent Trust Boundaries

Alexandre Medarov · February 15, 2026 · 7 min read

The Frameworks Haven’t Changed. The Threat Model Has.

Security architecture is not new. SABSA has been around since the late 1990s. TOGAF predates the smartphone. The Open Security Architecture (OSA) project mapped technical controls before cloud was mainstream. These frameworks work. They’ve been stress-tested across industries, audits, and real breaches.

What’s new is the actor.

AI agents are now operating inside enterprise environments — reading emails, querying databases, making decisions, triggering workflows. They’re not users. They’re not services. They’re not infrastructure. They’re something else entirely: autonomous decision-makers with access to sensitive data and the ability to act on it.

Your security architecture was designed for humans and systems. It now has a third category of tenant, and most organisations haven’t updated the blueprint.

This article covers what security architecture is, why the classic frameworks still matter, and what specifically changes when AI agents enter the picture. If you’re a CISO, security architect, or enterprise architect trying to figure out where AI fits in your existing security model — this is for you.


What Is Security Architecture?

If you’ve read our previous work, you know the analogy: security architecture is to information security what building architecture is to a physical structure.

You wouldn’t construct a 40-storey office tower without structural plans, load calculations, fire escape routes, and building codes. You wouldn’t let each floor’s contractor decide independently where to put the load-bearing walls. That’s how buildings collapse.

Security architecture is the structural plan for how an organisation protects its information assets. It defines:

  • What needs protecting (data, systems, processes)
  • Why it needs protecting (business risk, regulatory requirements, contractual obligations)
  • How it’s protected (controls, policies, technical mechanisms)
  • Where boundaries exist (trust zones, network segments, data classification tiers)
  • Who is responsible (roles, governance, accountability)

Without it, you get ad-hoc security — firewalls here, encryption there, an access policy nobody follows, and a penetration test report that reads like a horror novel.

In 2026, your architecture has a new tenant: AI agents. And they don’t fit neatly into any of the categories you’ve already defined. More on that in a moment. First, let’s revisit why the classic frameworks still hold up.


The Three Frameworks — Still Relevant

There are three frameworks that dominate enterprise security architecture. Each solves a different problem. None of them is complete on its own.

SABSA — Business-Driven Security

The Sherwood Applied Business Security Architecture (SABSA) starts where security should always start: with business requirements.

SABSA doesn’t ask “what firewall should we buy?” It asks “what does the business need to protect, and why?” It then works backwards through six layers — from contextual (business view) to component (technical implementation) — to derive security requirements from business attributes.

What SABSA delivers:

  • Security policies derived from business risk, not vendor feature lists
  • A traceability matrix linking every control to a business requirement
  • Risk and opportunity management at the board level
  • A common language between security and business leadership

Where SABSA stops: It’s a methodology, not a technical blueprint. It tells you what to protect and why, but it doesn’t hand you network diagrams or control specifications. You need something else for that.

TOGAF — Enterprise Architecture Alignment

The Open Group Architecture Framework (TOGAF) is an enterprise architecture framework. Security isn’t its primary focus — but that’s precisely why it matters.

TOGAF ensures that security architecture doesn’t exist in isolation. It aligns security with business architecture, application architecture, data architecture, and technology architecture through the Architecture Development Method (ADM). Security decisions are made in context, not in a vacuum.

What TOGAF delivers:

  • Security integrated into the enterprise architecture lifecycle
  • Alignment between security investments and business transformation
  • Architecture governance that prevents shadow IT (and now, shadow AI)
  • Standards-based interoperability across the technology estate

Where TOGAF stops: TOGAF is broad. It covers everything from application portfolios to migration planning. Its security content is necessary but not sufficient — you won’t find granular control mappings or threat modelling techniques here.

OSA — Technical Controls

The Open Security Architecture (OSA) project fills the gap the other two leave. It provides pattern-based technical security architectures — the actual control landscapes, network security patterns, and identity management models that engineers implement.

What OSA delivers:

  • Technical control patterns for common scenarios (perimeter security, identity federation, data protection)
  • Reference architectures that can be adapted to specific environments
  • A controls catalogue mapped to threat categories
  • Visual architecture patterns that translate policy into implementation

Where OSA stops: OSA is technical and tactical. It doesn’t connect controls to business risk (that’s SABSA’s job) or ensure alignment with enterprise strategy (that’s TOGAF’s job).

Why You Need All Three

Each framework is a lens. SABSA gives you the why, TOGAF gives you the where (in the enterprise context), and OSA gives you the how. Used together, they create a security architecture that is business-aligned, enterprise-integrated, and technically sound.

This hasn’t changed. What’s changed is what these frameworks now need to account for.


The AI Agent Layer: What Changes Now

AI agents introduce architectural challenges that don’t map cleanly to existing models. Here’s what’s different and why your current security architecture needs an update.

Trust Boundaries Need a New Model

Traditional trust boundaries are zone-based. You have your internal network, your DMZ, your cloud VPCs, your third-party integrations. Traffic crosses boundaries, and controls (firewalls, API gateways, WAFs) enforce policy at each crossing.

AI agents break this model. A single agent might:

  • Read from a CRM in one trust zone
  • Query a financial database in another
  • Call an external API in a third
  • Write results to a collaboration platform in a fourth

All within a single task execution. All autonomously.

The agent isn’t “in” a zone — it traverses zones. And it does so with a speed and breadth that no human user would. Your existing zone-based controls see individual API calls, but they don’t see the agent’s intent or the aggregate data exposure across those calls.

What your architecture needs: A trust boundary model for agents that accounts for cross-zone traversal, cumulative data access, and the principle of least privilege applied per-task, not per-identity.

Non-Human Identities Need Governance

AI agents need identities. They authenticate to systems, access data, and perform actions. In most organisations today, agents run on service accounts, API keys, or OAuth tokens that were designed for traditional system-to-system integration.

The problem: service accounts don’t have agency. They execute what they’re told. AI agents decide what to do — within the parameters of their instructions, but with significant latitude in how they accomplish a goal.

This creates a new identity category: the Non-Human Identity (NHI) with autonomous decision-making capability. These identities need:

  • Lifecycle management — provisioning, rotation, deprovisioning, just like human identities
  • Least-privilege enforcement — scoped to the specific task, not a broad role
  • Behavioural monitoring — detecting when an agent’s actions deviate from expected patterns
  • Attestation — proving which agent did what, with which version of which model, using which instructions

Most IAM systems weren’t designed for this. They handle human identities and service accounts. Agent identities sit somewhere in between, and your architecture needs to address them explicitly.

Data Flow Architecture Needs Rethinking

Your Data Loss Prevention (DLP) strategy probably monitors endpoints, email, cloud storage, and maybe API traffic. It looks for patterns: credit card numbers, personal data, classified documents leaving the perimeter.

AI agents create a data flow that DLP doesn’t see. An agent can:

  • Query a database, extract relevant records, summarise them, and send the summary to a user — without the raw data ever hitting a DLP-monitored channel
  • Combine data from multiple sources into a new dataset that didn’t exist before — creating derived sensitive data that wasn’t classified
  • Send data to an external model API for processing — effectively exfiltrating data through an inference call

Your data architecture needs to account for agent-mediated data flows. This means classifying data not just at rest and in transit, but at the point of agent access. What can the agent see? What can it derive? Where does the output go?

Decision Authority Needs Explicit Boundaries

This is the architectural challenge most organisations haven’t even started thinking about.

When a human makes a decision — approve a transaction, grant access, respond to a customer — there’s an implicit accountability chain. The human is trained, authorised, and (in theory) exercises judgement.

When an agent makes a decision, that chain breaks. Who’s accountable when an AI agent approves a refund it shouldn’t have? Or when it grants database access based on a misinterpreted request? Or when it sends sensitive data to the wrong recipient because the prompt was ambiguous?

Your security architecture needs a decision authority framework that explicitly defines:

Decision TypeAgent AuthorityHuman Required
Read-only data retrievalAutonomousNo
Data modification (low risk)Autonomous with loggingNo
Data modification (high risk)Proposal onlyYes — approval required
Access provisioningProposal onlyYes — approval required
External communicationAutonomous within templatesYes — for freeform
Financial transactionsProposal onlyYes — above threshold

This isn’t a technical control — it’s an architectural decision that shapes how agents are built, deployed, and monitored.

NIST 800-53 Already Covers More Than You Think

Here’s the good news: you don’t need to invent new control frameworks for AI agents. NIST 800-53 Rev. 5 already contains 47 controls across 14 families that are directly applicable to AI agent governance. We’ve mapped these in detail in our NIST framework analysis for AI agents.

The key families:

  • AC (Access Control) — agent authentication, least privilege, session management
  • AU (Audit & Accountability) — logging autonomous actions, non-repudiation
  • CA (Assessment, Authorization & Monitoring) — continuous monitoring of agent behaviour
  • IA (Identification & Authentication) — NHI credential management
  • RA (Risk Assessment) — AI-specific risk categories
  • SC (System & Communications Protection) — agent-to-system communication security
  • SI (System & Information Integrity) — prompt injection detection, output validation

The controls exist. What’s missing in most organisations is the application of these controls to the agent context. That’s an architecture problem, not a controls problem.

ISO 42001: The AI Management System Standard

ISO/IEC 42001 is the first international standard for AI Management Systems (AIMS). Published in 2023, it provides a framework for organisations to manage AI systems responsibly — covering governance, risk management, data quality, transparency, and accountability.

For security architects, ISO 42001 sits alongside ISO 27001 the same way AI agents sit alongside traditional IT systems: it’s an extension, not a replacement.

What 42001 adds to your architecture:

  • AI-specific risk assessment methodology
  • Requirements for AI system documentation and traceability
  • Data governance requirements specific to AI training and inference
  • Bias and fairness controls (relevant for agents making decisions about people)
  • Transparency requirements — users must know when they’re interacting with AI

If you’re already ISO 27001 certified, 42001 integration is the logical next step. The management system structure (Plan-Do-Check-Act) is identical. The controls are complementary. And your auditors will ask about it — if they haven’t already.


What Your Architecture Needs Now

Here’s the practical checklist. If you’re a security architect or CISO, these are the deliverables you should be working on.

1. Agent Inventory

You can’t secure what you don’t know about. Catalogue every AI agent operating in your environment:

  • What model does it use?
  • What data does it access?
  • What actions can it take?
  • Who deployed it? Who maintains it?
  • Is it sanctioned or shadow AI?

This is the AI equivalent of an asset register. Without it, everything else is guesswork.

2. Trust Zone Mapping for Agents

Extend your existing trust zone model to account for agent traversal:

  • Which zones does each agent cross?
  • What data moves between zones via agent actions?
  • What controls enforce policy at each crossing?
  • Are cumulative access patterns monitored?

3. Non-Human Identity Governance Framework

Define how agent identities are managed:

  • Provisioning and deprovisioning processes
  • Credential rotation schedules
  • Least-privilege scoping per task (not per agent)
  • Behavioural baselines and anomaly detection
  • Regular access reviews — just like human identities

4. Data Classification for Agent Access

Extend your data classification scheme:

  • Which classification levels can agents access?
  • What derived data can agents create, and how is it classified?
  • Where does agent-processed data go? Is the output channel monitored?
  • Are model API calls treated as data transfers for DLP purposes?

5. Prompt Injection Defence Patterns

Prompt injection is the SQL injection of the AI era. Your architecture needs defence-in-depth:

  • Input validation and sanitisation for agent inputs
  • System prompt isolation (agents should not expose their instructions)
  • Output filtering for sensitive data leakage
  • Sandboxed execution environments for untrusted inputs
  • Regular red-teaming of agent workflows

6. Human-in-the-Loop Decision Framework

Define the decision authority model:

  • Which decisions can agents make autonomously?
  • Which require human approval?
  • What’s the escalation path when an agent is uncertain?
  • How are overrides logged and reviewed?

7. Audit Logging for Autonomous Actions

Every autonomous action needs a log entry that captures:

  • What the agent did
  • Why (the input/prompt that triggered it)
  • What data it accessed
  • What decision it made
  • What the outcome was
  • Which model version was used

This isn’t just for compliance. It’s for incident response. When something goes wrong with an agent, you need a forensic trail that’s at least as detailed as what you’d have for a human user.


What dig8ital Delivers

We’ve been combining SABSA, TOGAF, and OSA into unified security architectures for years. It’s what we do. What’s changed is that we’ve extended this approach to cover the AI agent layer — because we build and operate AI agents ourselves.

Our platform runs autonomous agents that handle security assessments, compliance mapping, and architectural analysis. Every architectural principle in this article is one we apply to our own systems. We don’t just advise on agent trust boundaries — we enforce them in production, every day.

That means when we help you design your security architecture for the AI era, we bring:

  • Framework expertise — SABSA business-risk traceability, TOGAF enterprise alignment, OSA technical controls
  • AI-specific patterns — trust boundary models, NHI governance, decision authority frameworks
  • Operational experience — we run agents in production, so we know what breaks and what holds
  • Standards mapping — NIST 800-53, ISO 27001, ISO 42001, integrated into a single architecture

Where to Start

If you’ve read this far, you probably fall into one of two categories:

Category A: You have a mature security architecture based on SABSA, TOGAF, or both, and you need to extend it for AI agents. Good — the foundations are solid. The work is in adding the agent layer, NHI governance, and decision authority frameworks.

Category B: You’re building security architecture from scratch because AI adoption forced the conversation. Also fine — but don’t skip the fundamentals. SABSA, TOGAF, and OSA aren’t optional just because AI is the shiny new thing. They’re the foundation that makes AI governance possible.

Either way, the next step is the same: understand where AI sits in your current architecture and what gaps exist.

Not sure where AI fits in your security architecture? Start with our free CISO AI Readiness Assessment →

We’ll map your current state, identify the gaps, and give you a prioritised roadmap — no fluff, no 80-page PDF nobody reads. Just actionable architecture guidance.

Need help with this?

We help enterprise security teams implement what you just read — from strategy through AI-powered automation. First strategy session is free.

More Insights