Your NIST 800-53 Controls Already Cover AI Agents
Every week I see another “AI Security Framework” launch. Another PDF. Another acronym. Another set of controls that looks suspiciously like what we’ve been doing for the last 20 years — just with “AI” bolted onto the front.
Here’s what I tell every CISO who asks me about securing AI agents: you already have the controls. You’re just not mapping them.
The Framework You Already Own
NIST 800-53 Rev 5 has 1,189 controls across 20 families. When I sat down and mapped which ones apply to AI agent deployments, I found 47 controls across 14 families that directly govern how AI agents should operate in your environment.
Not theoretically. Practically.
Let me walk you through the ones that matter most.
Access Control (AC) — The Foundation
AI agents need identities. They need permissions. They need access boundaries. Everything in the AC family applies:
- AC-2 (Account Management): Every AI agent gets a service account. Documented. Reviewed quarterly. Deprovisioned when retired. This isn’t new — it’s what you do for every service account.
- AC-3 (Access Enforcement): Least privilege. Your AI agent that summarises support tickets doesn’t need write access to your HRIS. Enforce it the same way you enforce it for humans.
- AC-6 (Least Privilege): This is where most organisations fail with AI. They give agents broad API access because it’s easier. Don’t. Scope it. Document it. Review it.
Audit and Accountability (AU) — Trust But Verify
If you can’t audit what an AI agent did, you can’t trust it. Period.
- AU-2 (Event Logging): Log every action your AI agent takes. Every API call. Every data access. Every decision. This is non-negotiable.
- AU-3 (Content of Audit Records): What, when, where, who triggered it, what was the outcome. Same requirements as any system.
- AU-6 (Audit Review, Analysis, and Reporting): Someone needs to actually look at those logs. Automate the review if you want — but don’t skip it.
Configuration Management (CM) — Version Everything
AI agents aren’t static. Their prompts change. Their models update. Their tool access evolves.
- CM-2 (Baseline Configuration): Document what your agent is supposed to do. Its prompt. Its model version. Its tool access. That’s your baseline.
- CM-3 (Configuration Change Control): Changed the system prompt? That’s a configuration change. Log it. Approve it. Test it.
- CM-7 (Least Functionality): Don’t give your AI agent access to tools it doesn’t need. This is the principle of least functionality applied to AI — nothing radical.
Identification and Authentication (IA) — Who Is This Agent?
Non-human identity management is the next big challenge. But IA controls already cover it:
- IA-2 (Identification and Authentication): Authenticate your AI agents. API keys, OAuth tokens, managed identities — use your existing IAM infrastructure.
- IA-4 (Identifier Management): Unique identifiers for each agent instance. Not shared credentials. Not “the AI service account.” Individual, traceable, auditable identities.
Risk Assessment (RA) — Know Your Exposure
- RA-3 (Risk Assessment): Before you deploy an AI agent, assess the risk. What data does it access? What actions can it take? What happens if it hallucinates? This isn’t AI-specific risk assessment — it’s risk assessment. Applied to AI.
- RA-5 (Vulnerability Monitoring and Scanning): Prompt injection is a vulnerability. Model poisoning is a vulnerability. Scan for them the same way you scan for SQL injection.
System and Communications Protection (SC)
- SC-7 (Boundary Protection): Your AI agents operate within trust boundaries. Define them. Enforce them. Monitor them.
- SC-8 (Transmission Confidentiality): Encrypt communications between agents and the systems they interact with. TLS everywhere. No exceptions.
- SC-28 (Protection of Information at Rest): If your agent caches data, stores embeddings, or maintains conversation history — protect it at rest.
Incident Response (IR) — When Things Go Wrong
- IR-4 (Incident Handling): What’s your playbook when an AI agent goes rogue? When it accesses data it shouldn’t? When it generates harmful output? You need an AI-specific runbook — but it fits within your existing IR framework.
- IR-6 (Incident Reporting): AI incidents need to be reported through the same channels as any other security incident.
System and Information Integrity (SI)
- SI-4 (System Monitoring): Monitor AI agent behaviour. Anomaly detection applies here — unusual access patterns, unexpected data volumes, off-hours activity.
- SI-10 (Information Input Validation): Validate what goes into your AI agents. This is your prompt injection defence. Input validation isn’t new — the attack vector is.
The Practical Approach
Here’s what I recommend to every CISO:
- Inventory your AI agents. All of them. Including the ones your developers deployed without telling you.
- Map them to existing controls. Use the 47 controls I’ve identified. Don’t create a new framework.
- Identify gaps. You’ll find them in non-human identity management (IA), configuration management for prompts (CM), and AI-specific incident response (IR).
- Close the gaps. Update your existing control implementations. Don’t build a parallel governance structure.
- Audit it. Same cadence as everything else.
Stop Building New Frameworks
The security industry loves creating new frameworks. It justifies conferences, consulting engagements, and certification programmes.
But the truth is simpler: NIST 800-53 already covers AI agents. ISO 27001 Annex A already covers AI agents. Your existing controls already cover AI agents.
You just need to map them. And then actually implement them.
That’s not as exciting as a new framework launch. But it’s what works.
Need help with this?
We help enterprise security teams implement what you just read — from strategy through AI-powered automation. First strategy session is free.