ISO 42001 · AI Governance · Compliance · Framework · ISO 27001

ISO 42001 for Practitioners: What the AI Management System Standard Actually Requires

January 20, 2026 · 7 min read

Everyone’s talking about ISO 42001. Nobody’s implementing it.

Here’s why: the standard is written for certification bodies, not practitioners. It’s 30+ pages of management-system language that assumes you already speak fluent ISO. The guidance documents are worse — abstract principles wrapped in normative clauses, with zero practical examples.

I’ve spent the last year helping organisations implement ISO 42001. Not theoretically. Actually implement it — build the policies, run the risk assessments, map the controls, survive the audits. What follows is the practical version of this standard. What you actually need to do, in what order, and how it connects to frameworks you probably already have.

Let me translate it.

What is ISO 42001?

ISO/IEC 42001:2023 is the world’s first international standard for an AI Management System (AIMS). Published in December 2023, it provides a framework for organisations that develop, provide, or use AI systems to do so responsibly.

Think of it as ISO 27001, but for AI.

At a glance: 30+ pages in the standard · the same PDCA cycle as ISO 27001 · published December 2023 · the first international AI standard.

Where ISO 27001 gives you a management system for information security, ISO 42001 gives you a management system for artificial intelligence. Same structural DNA. Same Plan-Do-Check-Act cycle. Same clause structure (clauses 4 through 10). Different focus.

ISO 27001 asks: “How do you protect information?” ISO 42001 asks: “How do you govern AI responsibly?”

The key word is responsibly. This isn’t just about whether your AI works. It’s about whether your AI is fair, transparent, accountable, safe, and aligned with societal values. That’s a fundamentally different risk surface than information security — and it requires fundamentally different controls.

Who Needs It?

Three categories of organisations:

  1. AI Developers — If you build AI models or systems, ISO 42001 provides the governance framework for your development lifecycle.
  2. AI Providers — If you offer AI-powered products or services, the standard covers how you manage, monitor, and maintain those systems.
  3. AI Users — If you deploy AI systems (even third-party ones) in your operations, ISO 42001 applies to how you govern that usage.

Most organisations fall into category 3. If you’re using GPT-4 in your customer service, running ML models for fraud detection, or deploying AI agents for compliance automation, ISO 42001 is relevant to you.

Certification vs Conformance

You don’t have to certify. Many organisations will implement ISO 42001 as an internal governance framework without pursuing formal certification. That’s perfectly valid.

But certification is fast becoming a market differentiator and — increasingly — a procurement requirement. If your customers or regulators ask “how do you govern your AI?”, a certified AIMS is the strongest answer available.

The ISO 27001 Bridge: You’re Already 60% There

If your organisation already holds ISO 27001 certification, I have good news: you’ve done most of the structural work. ISO 42001 uses the same Harmonised Structure (HS) that underpins all modern ISO management system standards.

Here’s what carries over directly:

Identical Structural Elements:

  • Clause 4: Context of the organisation (interested parties, scope, AIMS boundaries)
  • Clause 5: Leadership (management commitment, policy, roles and responsibilities)
  • Clause 7: Support (resources, competence, awareness, communication, documented information)
  • Clause 9: Performance evaluation (monitoring, internal audit, management review)
  • Clause 10: Improvement (nonconformity, corrective action, continual improvement)

If you’ve built these for ISO 27001, you can extend them for ISO 42001. Same management review process. Same internal audit programme. Same document control system. Different scope, same scaffolding.

What’s Genuinely New:

Here’s where ISO 42001 diverges — and where the real implementation work lives:

  • AI-Specific Risk Assessment: Beyond CIA — assess bias, fairness, transparency, explainability, safety, and societal impact. A different risk taxonomy entirely.
  • AI Impact Assessment: No ISO 27001 equivalent. A structured evaluation of AI impacts on individuals, groups, and society. Think DPIA, but broader.
  • Data Governance for AI: Not just data security — data quality, provenance, bias, and representativeness for AI training and operation.
  • Responsible AI Principles: Fairness, accountability, transparency, and human oversight become auditable commitments, not just buzzwords.
  • AI System Lifecycle Management: Governance from design through development, testing, deployment, monitoring, and retirement. Each phase has specific controls.

The integration opportunity is significant. Run both management systems on the same infrastructure. Use the same audit programme. Leverage the same risk management process. But recognise that AI governance introduces genuinely new dimensions that information security alone doesn’t cover.

What You Actually Need to Implement

Let’s break this down clause by clause. Not the ISO language — the practical reality.

Clause 4: Context and Scope

What the standard says: Understand your organisation’s context, identify interested parties, and define the scope of your AIMS.

What you actually need to do:

  • AI Inventory: List every AI system your organisation develops, provides, or uses. Yes, every one. Including the ones someone in marketing set up with ChatGPT. Including the ML model that engineering built two years ago and nobody’s touched since. This is your scope baseline. If you’ve read our piece on shadow AI, you know why this matters.
  • Stakeholder Map: Identify who’s affected by your AI systems. Not just internal stakeholders — external ones too. Customers whose data feeds your models. Communities affected by your AI-driven decisions. Regulators who oversee your industry.
  • Scope Definition: Define which AI systems and activities fall within your AIMS. Start focused. You don’t have to boil the ocean. Pick your highest-risk AI systems and expand from there.

Practical output: An AI system register (inventory), a stakeholder analysis document, and a formal AIMS scope statement.
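
The register doesn't need special tooling to start; a structured record per system is enough. A minimal sketch in Python (the field names and the AIRole enum are illustrative choices, not mandated by the standard):

```python
from dataclasses import dataclass, field
from enum import Enum


class AIRole(Enum):
    DEVELOPER = "developer"  # you build the model or system
    PROVIDER = "provider"    # you offer it as a product or service
    USER = "user"            # you deploy someone else's system


@dataclass
class AISystemRecord:
    """One row in the AI system register: the Clause 4 scope baseline."""
    name: str
    purpose: str
    role: AIRole
    owner: str  # an accountable individual, not a team
    data_inputs: list[str] = field(default_factory=list)
    decision_outputs: list[str] = field(default_factory=list)
    affected_stakeholders: list[str] = field(default_factory=list)
    in_aims_scope: bool = False  # start focused, expand later


register = [
    AISystemRecord(
        name="support-chatbot",
        purpose="First-line customer support triage",
        role=AIRole.USER,
        owner="Head of Customer Operations",
        data_inputs=["customer messages", "order history"],
        decision_outputs=["response drafts", "escalation flags"],
        affected_stakeholders=["customers", "support agents"],
        in_aims_scope=True,
    ),
]
```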

Clause 5: Leadership and AI Policy

What the standard says: Top management must demonstrate commitment and establish an AI policy.

What you actually need to do:

Your AI policy is the cornerstone document. It needs to cover:

  • Responsible AI principles — Your organisation’s commitment to fairness, transparency, accountability, safety, and human oversight. These can’t be vague. “We are committed to ethical AI” means nothing. “All AI systems processing personal data will include explainability mechanisms that allow affected individuals to understand automated decisions” means something auditable.
  • Bias and fairness — How you identify, assess, and mitigate bias in AI systems. What fairness means in your context (statistical parity? equal opportunity? individual fairness?). Who’s responsible for bias monitoring. A minimal sketch of one such metric follows this list.
  • Transparency — What information you provide about AI system usage. To whom. At what level of detail. How you handle the tension between transparency and intellectual property.
  • Accountability — Clear ownership for every AI system. Escalation paths. Decision authority for AI risk acceptance. Board-level oversight mechanisms.
  • Human oversight — Where humans remain in the loop. What decisions require human approval. How override mechanisms work.
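
To make ‘what fairness means in your context’ concrete: statistical parity compares positive-outcome rates across groups. A minimal Python sketch, assuming the two-group case and an illustrative 0.10 threshold (set yours in the AI policy):

```python
def statistical_parity_difference(outcomes, groups, positive=1):
    """Difference in positive-outcome rate between two groups.

    outcomes: model decisions (e.g. 1 = approved, 0 = denied)
    groups:   group label per decision, same length as outcomes
    """
    labels = sorted(set(groups))
    assert len(labels) == 2, "this sketch handles the two-group case"
    rates = {}
    for label in labels:
        group_outcomes = [o for o, g in zip(outcomes, groups) if g == label]
        rates[label] = sum(1 for o in group_outcomes if o == positive) / len(group_outcomes)
    return rates[labels[0]] - rates[labels[1]]


spd = statistical_parity_difference(
    outcomes=[1, 0, 1, 1, 0, 1, 0, 0],
    groups=["a", "a", "a", "a", "b", "b", "b", "b"],
)
if abs(spd) > 0.10:  # illustrative threshold; set yours in the AI policy
    print(f"Parity gap {spd:+.2f} exceeds policy threshold; escalate.")
```

Equal opportunity is computed the same way, restricted to the individuals who should have received the positive outcome.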

Practical output: An AI policy document (2–5 pages), endorsed by top management, communicated to all relevant staff.

Pro tip: Don’t write a separate AI policy from scratch. Extend your existing information security policy or create an AI-specific annex. Same governance structure, AI-specific content.

Clause 6: Planning — AI Risk Assessment

What the standard says: Establish an AI risk assessment process and determine AI-related risks and opportunities.

What you actually need to do:

This is where ISO 42001 gets genuinely different from ISO 27001. Your information security risk assessment evaluates threats to confidentiality, integrity, and availability. Your AI risk assessment needs to evaluate a broader — and often more subjective — set of risks:

AI-Specific Risk Categories:

  • Bias and discrimination — Could the AI system produce unfair outcomes for certain groups? Based on what protected characteristics? How would you detect this?
  • Safety and reliability — Could the AI system cause physical or psychological harm? What happens when it fails? What’s the failure mode?
  • Transparency and explainability — Can affected individuals understand how decisions are made? Can you explain the AI system’s logic to a regulator?
  • Privacy and data protection — Beyond GDPR compliance (which you should already have), what are the AI-specific privacy risks? Model memorisation? Inference attacks? Training data exposure?
  • Societal and environmental impact — What are the broader impacts? Job displacement? Environmental cost of training? Power concentration?
  • Accountability gaps — If the AI system makes a harmful decision, who’s responsible? Is that clearly defined?

Assessment methodology: You can extend your ISO 27001 risk assessment methodology, but you need to add these AI-specific criteria. Likelihood × Impact still works, but your impact scale needs to include societal harm, not just business impact.
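
One way to extend the methodology as described: score business impact and societal impact separately, on the same 1–5 scales, and let the worse of the two drive the rating. A minimal sketch (the scales and the max() aggregation are assumptions to illustrate the idea, not the standard’s prescription):

```python
# AI-specific risk categories from the list above; score each per system.
RISK_CATEGORIES = [
    "bias_discrimination", "safety_reliability", "transparency_explainability",
    "privacy_data_protection", "societal_environmental", "accountability_gaps",
]


def risk_rating(likelihood: int, business_impact: int, societal_impact: int) -> int:
    """Rating = likelihood x worst impact, all on 1-5 scales."""
    assert all(1 <= v <= 5 for v in (likelihood, business_impact, societal_impact))
    return likelihood * max(business_impact, societal_impact)


# Example: a CV-screening model, scored for bias_discrimination.
# Modest business impact (2) but high societal impact (5) drives the rating.
score = risk_rating(likelihood=3, business_impact=2, societal_impact=5)
print(f"bias_discrimination: {score}/25")  # 15/25: treat as high
```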

Practical output: AI risk register (integrated with or linked to your existing risk register), AI risk assessment methodology document, risk treatment plans for high-rated AI risks.

Clause 8: AI System Lifecycle

What the standard says: Plan, implement, and control the processes needed for the AI system lifecycle.

What you actually need to do:

This is the operational heart of ISO 42001. Every AI system within your scope needs documented lifecycle governance:

Design Phase:

  • Requirements specification that includes responsible AI requirements (not just functional ones)
  • Data requirements and data quality criteria
  • Fairness and bias requirements (what metrics, what thresholds)
  • Stakeholder consultation for high-impact systems

Development Phase:

  • Data governance — provenance, quality, bias assessment of training data
  • Model development — documentation of architecture decisions, training processes, hyperparameter choices
  • Testing — not just accuracy, but fairness testing, robustness testing, edge case analysis
  • Validation — against responsible AI requirements, not just functional specs

Deployment Phase:

  • Deployment approval process (who authorises go-live for AI systems?)
  • User documentation and transparency disclosures
  • Monitoring plan — what metrics, what thresholds, what alerts
  • Rollback procedures — how do you pull an AI system if something goes wrong?

Monitoring Phase:

  • Ongoing performance monitoring (accuracy, fairness, drift)
  • Incident response for AI-specific incidents (biased outputs, harmful content, unexpected behaviour)
  • Regular reassessment of risks (your risk profile changes as the world changes)
  • Feedback mechanisms for affected individuals

Retirement Phase:

  • Decommissioning process
  • Data retention and deletion
  • Stakeholder notification
  • Knowledge capture (what did you learn?)

Practical output: AI system lifecycle procedure, documented lifecycle stages for each in-scope AI system, monitoring dashboards and alert configurations.
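
For the monitoring phase in particular, the working pattern is: agree metrics and thresholds at deployment approval, then alert when production values cross them. A minimal sketch (the metric names and threshold values are illustrative assumptions):

```python
# Thresholds agreed at deployment approval and recorded, not set ad hoc.
MONITORING_THRESHOLDS = {
    "accuracy_min": 0.90,    # functional performance floor
    "parity_gap_max": 0.10,  # fairness: maximum group-rate divergence
    "drift_psi_max": 0.25,   # input drift (population stability index)
}


def check_monitoring_metrics(metrics: dict[str, float]) -> list[str]:
    """Compare live metrics to agreed thresholds; return alerts for breaches."""
    alerts = []
    if metrics["accuracy"] < MONITORING_THRESHOLDS["accuracy_min"]:
        alerts.append("accuracy below floor: trigger model review")
    if metrics["parity_gap"] > MONITORING_THRESHOLDS["parity_gap_max"]:
        alerts.append("fairness gap breach: open an AI incident")
    if metrics["drift_psi"] > MONITORING_THRESHOLDS["drift_psi_max"]:
        alerts.append("input drift: reassess risk, consider rollback")
    return alerts


print(check_monitoring_metrics({"accuracy": 0.93, "parity_gap": 0.14, "drift_psi": 0.11}))
# ['fairness gap breach: open an AI incident']
```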

Annex A: The 38 AI-Specific Controls

Annex A is where ISO 42001 provides its control set — 38 controls organised under control objectives A.2 through A.10. These are the specific measures you need to implement (or justify excluding). Grouped thematically, the key areas are:

AI Policies and Organisation:

  • AI policy establishment and communication
  • Internal organisation for AI governance
  • Roles, responsibilities, and authorities for AI

AI Risk Management:

  • AI risk assessment processes
  • AI risk treatment
  • AI risk documentation and communication

AI System Development and Data:

  • Data quality and data governance for AI
  • AI system design documentation
  • AI system testing and validation
  • Technical documentation requirements
  • Record-keeping for AI system decisions

AI Transparency and Explainability:

  • Transparency disclosures to affected individuals
  • Explainability mechanisms for AI decisions
  • Communication about AI system capabilities and limitations

AI Operations and Monitoring:

  • AI system monitoring and performance tracking
  • AI incident management
  • Bias monitoring and mitigation
  • AI system change management
  • Third-party AI governance

Human Oversight:

  • Human-in-the-loop requirements
  • Override mechanisms
  • Escalation procedures

Not all 38 controls apply to every organisation. Like ISO 27001’s Statement of Applicability, you need to assess each control, implement what’s relevant, and justify what you exclude. The justification matters — auditors will challenge blanket exclusions.

Practical output: Statement of Applicability for AI controls, control implementation evidence for each applicable control.
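
A Statement of Applicability is, at its core, one row per control: applicable or not, with a justification either way. A minimal sketch (the control IDs and titles here are illustrative placeholders, not quotations from the standard):

```python
from dataclasses import dataclass


@dataclass
class SoAEntry:
    control_id: str
    title: str
    applicable: bool
    justification: str  # required either way; auditors read these
    evidence: str = ""  # where implementation proof lives


soa = [
    SoAEntry("A.5.2", "AI system impact assessment process", True,
             "High-risk CV-screening model in scope", "AIA register"),
    SoAEntry("A.10.3", "Supplier AI management", False,
             "No third-party AI suppliers in current scope; "
             "revisit at next management review"),
]

# Auditors challenge blanket exclusions: every entry needs a justification.
for entry in soa:
    assert entry.justification, f"{entry.control_id}: missing justification"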

Annex B: AI Impact Assessment

Annex B provides implementation guidance for the Annex A controls, and its guidance on impact assessment (supporting control objective A.5) introduces the AI Impact Assessment (AIA) — a structured evaluation of how an AI system might affect individuals, groups, and society. This is not optional for high-risk AI systems.

When to conduct an AIA:

  • Before deploying any new AI system
  • When significantly modifying an existing AI system
  • When the operating context changes materially
  • Periodically for high-risk systems (annual minimum)

What an AIA covers:

  • Description of the AI system and its purpose
  • Identification of affected stakeholders
  • Assessment of potential positive and negative impacts
  • Evaluation of bias and fairness implications
  • Assessment of transparency and explainability
  • Evaluation of human oversight mechanisms
  • Risk mitigation measures
  • Residual impact assessment

If you’re familiar with DPIAs under GDPR, the AIA follows a similar structure but with a broader scope. A DPIA focuses on data protection impacts. An AIA covers societal impacts, fairness impacts, safety impacts, and more.

Practical output: AIA template, completed AIAs for each high-risk AI system within scope.
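
Because assessments need to be comparable across systems, a fixed template helps. A minimal sketch of the sections above as a structured record (the field names are one possible organisation, not prescribed by Annex B):

```python
AIA_TEMPLATE = {
    "system_description": "",          # the AI system and its purpose
    "affected_stakeholders": [],       # individuals, groups, society
    "positive_impacts": [],
    "negative_impacts": [],
    "bias_fairness_findings": "",
    "transparency_explainability": "",
    "human_oversight_mechanisms": "",
    "mitigations": [],                 # measures that reduce negative impacts
    "residual_impact": "",             # what remains after mitigation
    "review_due": "",                  # annual minimum for high-risk systems
}


def new_aia(system_name: str) -> dict:
    """Start a fresh assessment from the template (copies the lists)."""
    aia = {k: list(v) if isinstance(v, list) else v for k, v in AIA_TEMPLATE.items()}
    aia["system_description"] = system_name
    return aia
```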

The EU AI Act Connection

Here’s why ISO 42001 matters beyond the certification itself: it’s becoming the de facto compliance framework for the EU AI Act.

The EU AI Act (Regulation 2024/1689) establishes a risk-based regulatory framework for AI systems in the EU. For high-risk AI systems, the requirements are extensive: risk management, data governance, technical documentation, transparency, human oversight, accuracy, robustness, and cybersecurity.

Sound familiar? It should. ISO 42001 was designed with the EU AI Act in mind. The mapping isn’t perfect, but it’s the closest thing to a turnkey compliance framework that exists.

Key alignments:

  • EU AI Act Art. 9 (Risk management) → ISO 42001 Clause 6 + Annex A risk controls
  • EU AI Act Art. 10 (Data governance) → ISO 42001 Annex A data quality controls
  • EU AI Act Art. 11 (Technical documentation) → ISO 42001 Clause 8 lifecycle documentation
  • EU AI Act Art. 13 (Transparency) → ISO 42001 Annex A transparency controls
  • EU AI Act Art. 14 (Human oversight) → ISO 42001 Annex A human oversight controls
  • EU AI Act Art. 15 (Accuracy, robustness, cybersecurity) → ISO 42001 Annex A operations controls + ISO 27001

If you need EU AI Act compliance for high-risk systems, ISO 42001 is your implementation roadmap. Not the only one — but the most structured, auditable, and internationally recognised one.

The European Commission has signalled that harmonised standards (which CEN-CENELEC is developing, drawing on ISO 42001 and its supporting standards) will provide a presumption of conformity with the EU AI Act. Translation: if you’re ISO 42001 certified, demonstrating EU AI Act compliance becomes significantly easier.

Cross-Framework Mapping: One Control, Multiple Frameworks

One of the most powerful aspects of ISO 42001 is how it connects to other frameworks. If you’re managing multiple compliance obligations (and who isn’t?), cross-framework mapping eliminates redundant work.

Here’s how key ISO 42001 controls map across frameworks:

AI Risk Assessment

  • ISO 42001: Clause 6.1 — AI risk assessment process
  • ISO 27001: Clause 6.1.2 — Information security risk assessment (extend the methodology)
  • NIST AI RMF: GOVERN 1, MAP 1–5 — AI risk identification and assessment
  • EU AI Act: Art. 9 — Risk management system for high-risk AI

Data Governance

  • ISO 42001: Annex A data quality controls
  • ISO 27001: Annex A asset management controls (partial overlap)
  • NIST AI RMF: MAP 2.3, MEASURE 2.6 — Data quality and representativeness
  • EU AI Act: Art. 10 — Data and data governance

Transparency

  • ISO 42001: Annex A transparency and explainability controls
  • ISO 27001: No direct equivalent
  • NIST AI RMF: GOVERN 4, MAP 3 — Transparency and documentation
  • EU AI Act: Art. 13 — Transparency and provision of information to deployers

Human Oversight

  • ISO 42001: Annex A human oversight controls
  • ISO 27001: No direct equivalent
  • NIST AI RMF: GOVERN 1 — Human oversight governance
  • EU AI Act: Art. 14 — Human oversight measures

The pattern is clear: implementing ISO 42001 well means you’re simultaneously addressing NIST AI RMF requirements and EU AI Act obligations. One set of controls, documented once, mapped to multiple frameworks. That’s the efficiency gain.
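
Mechanically, that mapping is a lookup: each internal control points at the external requirements it evidences. A minimal sketch (the entries mirror the tables above; this is an illustration, not any product’s schema):

```python
# Internal control -> the external requirements it evidences, per framework.
CONTROL_MAP = {
    "ai-risk-assessment": {
        "ISO 42001": "Clause 6.1",
        "ISO 27001": "Clause 6.1.2 (extended methodology)",
        "NIST AI RMF": "GOVERN 1, MAP 1-5",
        "EU AI Act": "Art. 9",
    },
    "human-oversight": {
        "ISO 42001": "Annex A human oversight controls",
        "NIST AI RMF": "GOVERN 1",
        "EU AI Act": "Art. 14",
    },
}


def coverage(control_id: str) -> dict[str, str]:
    """Document a control once; report what it satisfies everywhere."""
    return CONTROL_MAP.get(control_id, {})


print(coverage("ai-risk-assessment"))
```

The design point: maintain the mapping once, then read it from every framework’s point of view.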

For a deeper look at how AI frameworks like NIST AI RMF apply to agentic systems, see our analysis of NIST and AI agents.

Our Compliance Agent handles this mapping automatically — when you implement a control for ISO 42001, it shows you the corresponding requirements across every other framework in your scope.

Getting Started: The 90-Day Roadmap

Theory is useful. A plan is better. Here’s a practical 90-day roadmap to get your ISO 42001 implementation off the ground.

Month 1: Foundation (Days 1–30)

Week 1–2: AI Inventory and Gap Assessment

  • Conduct a complete AI system inventory. Every AI system: developed, procured, used, experimental, retired-but-still-running. Include shadow AI — the systems nobody told IT about.
  • For each system, document: purpose, data inputs, decision outputs, affected stakeholders, current governance measures.
  • Run a gap assessment against ISO 42001 requirements. Where do you stand today? What’s missing?

Week 3–4: Scope and Stakeholder Analysis

  • Define your AIMS scope. Start with your highest-risk AI systems. You can expand later.
  • Identify and document interested parties — internal and external.
  • Brief top management. Get formal commitment. Without leadership buy-in, the rest is academic.

Deliverables: AI system register, gap assessment report, AIMS scope document, management commitment statement.
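
The gap assessment itself can start as a clause-by-clause status record against the standard’s structure. A minimal sketch (the statuses are hypothetical, roughly where an ISO 27001-certified organisation might land):

```python
# Clause-by-clause status: "missing", "partial", or "in_place".
GAP_ASSESSMENT = {
    "4 Context": "partial",                 # inventory exists, scope undefined
    "5 Leadership": "missing",              # no AI policy yet
    "6 Planning": "missing",                # no AI-specific risk methodology
    "7 Support": "in_place",                # reuse ISO 27001 document control
    "8 Operation": "missing",               # no lifecycle governance
    "9 Performance evaluation": "partial",  # audit programme lacks AI scope
    "10 Improvement": "in_place",           # reuse corrective action process
}

gaps = sorted(c for c, s in GAP_ASSESSMENT.items() if s != "in_place")
print(f"{len(gaps)} of {len(GAP_ASSESSMENT)} clauses need work: {gaps}")
```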

Month 2: Risk and Policy (Days 31–60)

Week 5–6: AI Risk Assessment

  • Develop (or extend) your risk assessment methodology to include AI-specific risk categories.
  • Conduct risk assessments for all in-scope AI systems.
  • Prioritise risks. Develop treatment plans for high-rated risks.

Week 7–8: Policy Development

  • Draft your AI policy. Cover responsible AI principles, bias and fairness commitments, transparency obligations, accountability structures, and human oversight requirements.
  • Draft your AI impact assessment procedure.
  • Begin developing operational procedures for AI system lifecycle management.

Deliverables: AI risk assessment methodology, completed risk assessments, AI risk register, AI policy (draft for approval), AIA procedure.

Month 3: Controls and Audit (Days 61–90)

Week 9–10: Control Implementation

  • Map ISO 42001 Annex A controls to your AI systems. Complete your Statement of Applicability.
  • Implement priority controls — focus on high-risk systems first.
  • Document evidence of control implementation.

Week 11–12: Internal Audit and Review

  • Conduct your first internal audit against ISO 42001 requirements.
  • Document nonconformities and observations.
  • Run your first management review — present findings, get decisions, plan next steps.

Deliverables: Statement of Applicability, control implementation evidence, internal audit report, management review minutes, corrective action plans.

After 90 Days

You now have a functioning AIMS. It’s not perfect — no management system is at 90 days. But you have the structure, the policies, the risk assessments, and the initial controls. From here, it’s continuous improvement:

  • Expand scope to additional AI systems
  • Mature your risk assessment process
  • Conduct AI impact assessments for high-risk systems
  • Build competence across the organisation
  • Consider formal certification (typically 6–12 months after initial implementation)

The Authority Gap

Here’s the honest truth about ISO 42001 right now: there’s a massive gap between the organisations talking about it and the organisations doing it. Conference panels discuss AI governance in the abstract. LinkedIn posts share the standard’s structure. Webinars give high-level overviews.

Very few people are in the trenches, actually building AIMS implementations, running AI risk assessments, and mapping controls across frameworks.

That gap is an opportunity. The organisations that implement ISO 42001 now — while their competitors are still debating whether they need it — will have a significant governance advantage. They’ll be ready for EU AI Act enforcement. They’ll win procurement decisions where AI governance is a criterion. They’ll manage AI risk effectively while others are still figuring out what risks they have.

The standard isn’t going away. The regulation isn’t optional. The question isn’t whether to implement ISO 42001 — it’s when.

Now is better than later.

See Cross-Framework Mapping in Action

Our Compliance Agent tracks ISO 42001 alongside ISO 27001, NIS2, DORA, EU AI Act, NIST AI RMF, and GDPR simultaneously. One control, mapped across every framework in your scope. Automatically.

No spreadsheets. No manual mapping. No consultant invoices.

See how it works →

