Governance Frameworks -- ISO 42001 and NIST AI RMF

The EU AI Act tells you what you must do. ISO 42001 and NIST AI RMF tell you how to do it systematically. For MMS, the practical approach is: EU AI Act for legal compliance (mandatory), ISO 42001 for structured implementation and certification (advisable), NIST AI RMF for deeper risk management methodology (recommended).


Framework Landscape

|  | EU AI Act | ISO/IEC 42001 | NIST AI RMF |
| --- | --- | --- | --- |
| Type | Binding law | Certifiable standard | Voluntary guidance |
| Origin | European Union (2024) | ISO/IEC (Dec 2023) | US NIST (Jan 2023) |
| Scope | AI systems in/affecting the EU market | Any organization managing AI, globally | Any organization (sector-agnostic) |
| Enforcement | Fines up to EUR 35M / 7% of turnover | Voluntary; certification via accredited auditors | None; voluntary adoption |
| Approach | Risk classification + prescriptive rules | Management system (Plan-Do-Check-Act) | Risk management functions (Govern-Map-Measure-Manage) |
| Certification | Conformity assessment (high-risk only) | Yes (third-party audit, BS ISO/IEC 42006:2025) | No (self-assessment) |
| Updates | Amendments via EU legislative process | Periodic ISO revisions | NIST RMF 1.1 guidance + profiles through 2026 |

ISO/IEC 42001 – AI Management System

What It Is

ISO/IEC 42001 is an international standard for establishing, implementing, maintaining, and continually improving an AI Management System (AIMS). Published December 2023, it follows the same management system model as ISO 27001 (information security) and ISO 9001 (quality management).

It focuses on the management structure around AI systems – not on the AI technology itself. This means it works regardless of whether you’re using LLMs, traditional ML, or rule-based AI.

Structure

ISO 42001 follows the Annex SL high-level structure (shared by all modern ISO management system standards):

| Clause | Topic | AI-Specific Focus |
| --- | --- | --- |
| 4 | Context of the organization | AI strategy alignment, stakeholder needs, AIMS scope |
| 5 | Leadership | Management commitment to responsible AI, AI policy |
| 6 | Planning | AI risk assessment, objectives, treatment of risks and opportunities |
| 7 | Support | Resources, competence, awareness, communication, documented information |
| 8 | Operation | AI system lifecycle management, impact assessment, data management, third-party relationships |
| 9 | Performance evaluation | Monitoring, measurement, internal audit, management review |
| 10 | Improvement | Nonconformity, corrective action, continual improvement |
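One practical use of this clause structure is as the skeleton for an internal gap analysis. A minimal sketch in Python: the clause names come from the table above, while the status values and reporting format are hypothetical placeholders, not anything ISO 42001 prescribes.

```python
# Minimal AIMS gap-analysis sketch over ISO 42001 clauses 4-10 (Annex SL).
# Status labels ("implemented", "in progress", ...) are hypothetical.
AIMS_CLAUSES = {
    4: "Context of the organization",
    5: "Leadership",
    6: "Planning",
    7: "Support",
    8: "Operation",
    9: "Performance evaluation",
    10: "Improvement",
}

def gap_report(status: dict[int, str]) -> list[str]:
    """List every clause not yet marked 'implemented'."""
    return [
        f"Clause {n}: {AIMS_CLAUSES[n]} ({status.get(n, 'not assessed')})"
        for n in AIMS_CLAUSES
        if status.get(n) != "implemented"
    ]

# Example: only clauses 4 and 5 are done so far.
for line in gap_report({4: "implemented", 5: "implemented", 6: "in progress"}):
    print(line)
```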

Key Requirements

  • AI Policy: Documented policy aligned with organizational strategy, communicated to all relevant parties
  • Risk Assessment: Systematic identification and assessment of AI risks across the lifecycle
  • Impact Assessment: Evaluation of potential impacts on individuals, groups, and society
  • Data Management: Controls for data quality, provenance, bias, and privacy throughout the AI lifecycle
  • Transparency: Documentation of AI system capabilities, limitations, and decision-making processes
  • Human Oversight: Mechanisms for human monitoring, intervention, and override
  • Third-Party Management: Controls for AI systems and components provided by external parties
  • Continual Improvement: Regular review and improvement of the AIMS based on performance data
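Several of these requirements (risk assessment, impact assessment, human oversight, third-party management) boil down to keeping structured records per AI system. A minimal, hypothetical record shape in Python; the standard requires the activities, not this particular schema.

```python
from dataclasses import dataclass, field

# Hypothetical per-system record combining risk-assessment, impact-assessment,
# and oversight fields. Illustrative only; ISO 42001 mandates the activities,
# not this exact data structure.
@dataclass
class AIRiskRecord:
    system_name: str
    lifecycle_stage: str                 # e.g. "design", "deployment", "monitoring"
    risks: list[str] = field(default_factory=list)
    affected_parties: list[str] = field(default_factory=list)  # impact assessment
    human_oversight: str = "none"        # e.g. "human-in-the-loop", "override only"
    third_party_components: list[str] = field(default_factory=list)

    def needs_review(self) -> bool:
        # A system with identified risks but no oversight mechanism
        # should be escalated before it moves past design.
        return bool(self.risks) and self.human_oversight == "none"

record = AIRiskRecord(
    system_name="support-chatbot",
    lifecycle_stage="design",
    risks=["hallucinated answers", "PII leakage"],
    affected_parties=["customers", "support staff"],
)
print(record.needs_review())  # True: risks identified, no oversight defined yet
```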

Certification

As of 2025, certification is available through accredited auditors under BS ISO/IEC 42006:2025. Certification demonstrates to regulators, customers, and partners that the organization has a functioning AI governance system.

Why it matters for MMS: If MMS already has ISO 27001 (information security), the management system structure is familiar. ISO 42001 builds on the same foundation, making adoption significantly easier than starting from scratch.


NIST AI Risk Management Framework

What It Is

The NIST AI RMF (v1.0, January 2023) is a voluntary framework developed by the US National Institute of Standards and Technology. It provides a flexible, structured approach to managing AI risks throughout the AI system lifecycle.

Unlike ISO 42001 (which prescribes management system requirements), NIST AI RMF is a risk management methodology – it helps you think through risks systematically without prescribing specific controls.

Four Core Functions

┌──────────────────────────────────────────────────┐
│                      GOVERN                      │
│  Policies, roles, culture, accountability        │
│  (cross-cutting — informs all other functions)   │
└──────────────────────────────────────────────────┘
        │               │               │
        v               v               v
  ┌──────────┐   ┌──────────┐   ┌──────────┐
  │   MAP    │   │ MEASURE  │   │  MANAGE  │
  │          │   │          │   │          │
  │ Context, │   │ Analyze, │   │ Respond, │
  │ risks,   │   │ assess,  │   │ recover, │
  │ impacts  │   │ track    │   │prioritize│
  └──────────┘   └──────────┘   └──────────┘
| Function | Purpose | Key Activities |
| --- | --- | --- |
| Govern | Establish governance structure | Define roles, policies, accountability. Foster a risk-aware culture. Align with organizational values. |
| Map | Contextualize risks | Identify stakeholders, intended use, deployment context. Map potential impacts. Understand technical limitations. |
| Measure | Assess and track risks | Develop metrics, test for bias/fairness, evaluate performance, monitor for drift. |
| Manage | Respond to identified risks | Prioritize risks, implement mitigations, plan for incidents, allocate resources. |
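Of the four functions, Measure is the most directly automatable. As one illustration of the "monitor for drift" activity, here is a population stability index (PSI) check comparing a production feature distribution against its training baseline. The thresholds quoted in the comment are common industry rules of thumb, not something NIST prescribes.

```python
import math

def psi(expected: list[float], actual: list[float]) -> float:
    """Population Stability Index between two binned distributions,
    each given as bin proportions summing to 1."""
    eps = 1e-6  # avoid log(0) when a bin is empty
    return sum(
        (a - e) * math.log((a + eps) / (e + eps))
        for e, a in zip(expected, actual)
    )

baseline = [0.25, 0.25, 0.25, 0.25]   # training-time bin proportions
current  = [0.10, 0.20, 0.30, 0.40]   # production bin proportions

score = psi(baseline, current)
# Common rule of thumb: <0.1 stable, 0.1-0.25 moderate shift, >0.25 major shift
print(f"PSI = {score:.3f}")
```

A PSI above the alert threshold would feed the Manage function: prioritize the finding, retrain or recalibrate, and log the incident for the post-market monitoring record.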

Profiles and Companions

NIST has published companion resources that extend the base framework:

  • AI RMF Playbook: Suggested actions for each subcategory
  • AI RMF Crosswalks: Mapping to ISO 42001, OECD AI Principles, EU AI Act
  • Generative AI Profile (NIST AI 600-1): Specific guidance for generative-AI risks (confabulation/hallucination, CBRN information access, data privacy)
  • Critical Infrastructure Profile (April 2026): Guidance for AI in critical infrastructure operations

Why It Matters (Even in the EU)

NIST AI RMF is not legally required in the EU, but it offers the most detailed risk management methodology available. Many EU organizations reference it because:

  • The EU AI Act requires risk management but doesn’t prescribe a specific methodology
  • NIST provides granular, actionable guidance that ISO 42001 doesn’t
  • Regulators and auditors recognize NIST as a credible framework
  • For global companies, NIST alignment helps with US operations/partnerships

Framework Crosswalk

How the three frameworks map to each other:

| EU AI Act Requirement | ISO 42001 Control | NIST AI RMF Function |
| --- | --- | --- |
| Risk management system (Art. 9) | Clause 6 (Planning), Annex B (controls) | MAP, MEASURE |
| Data governance (Art. 10) | Clause 8 (Operation - data management) | MAP 2.3, MEASURE 2.6 |
| Technical documentation (Art. 11, Annex IV) | Clause 7.5 (Documented information) | MAP 1.1, GOVERN 1.4 |
| Record-keeping (Art. 12) | Clause 9 (Performance evaluation) | MEASURE 2.1, MANAGE 4.1 |
| Transparency (Art. 13) | Clause 8 (transparency controls) | MAP 3.5, GOVERN 1.2 |
| Human oversight (Art. 14) | Clause 8 (human oversight controls) | GOVERN 1.3, MANAGE 3.2 |
| Accuracy, robustness (Art. 15) | Clause 8 (quality objectives) | MEASURE 2.5, MEASURE 2.7 |
| Post-market monitoring (Art. 72) | Clauses 9, 10 (evaluation, improvement) | MANAGE 4.2, MEASURE 3.3 |
| Conformity assessment (Art. 43) | Full AIMS certification | Full framework implementation |
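Teams that track compliance programmatically can capture the crosswalk as a simple lookup table. The mappings below mirror the table above and should be treated as a working aid, not an authoritative legal mapping.

```python
# Crosswalk from EU AI Act articles to ISO 42001 clauses and NIST AI RMF
# subcategories, transcribed from the mapping table (working aid only).
CROSSWALK = {
    "Art. 9 Risk management":        {"iso": "Clause 6",      "nist": ["MAP", "MEASURE"]},
    "Art. 10 Data governance":       {"iso": "Clause 8",      "nist": ["MAP 2.3", "MEASURE 2.6"]},
    "Art. 11 Technical docs":        {"iso": "Clause 7.5",    "nist": ["MAP 1.1", "GOVERN 1.4"]},
    "Art. 12 Record-keeping":        {"iso": "Clause 9",      "nist": ["MEASURE 2.1", "MANAGE 4.1"]},
    "Art. 13 Transparency":          {"iso": "Clause 8",      "nist": ["MAP 3.5", "GOVERN 1.2"]},
    "Art. 14 Human oversight":       {"iso": "Clause 8",      "nist": ["GOVERN 1.3", "MANAGE 3.2"]},
    "Art. 15 Accuracy, robustness":  {"iso": "Clause 8",      "nist": ["MEASURE 2.5", "MEASURE 2.7"]},
    "Art. 72 Post-market monitoring":{"iso": "Clauses 9, 10", "nist": ["MANAGE 4.2", "MEASURE 3.3"]},
}

def controls_for(article: str) -> dict:
    """Return the ISO clause and NIST subcategories mapped to an article."""
    for key, mapping in CROSSWALK.items():
        if key.startswith(article + " "):
            return mapping
    raise KeyError(article)

print(controls_for("Art. 14"))
```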

Key insight

Implementing ISO 42001 systematically satisfies most EU AI Act requirements. The NIST AI RMF fills in the “how to think about risks” methodology that neither the Act nor ISO prescribes in detail.


Practical Approach for MMS

| Priority | Framework | Why | Timeline |
| --- | --- | --- | --- |
| 1 (Mandatory) | EU AI Act | Legal requirement. Non-compliance = fines up to 7% of turnover. | By Aug 2, 2026 |
| 2 (Advisable) | ISO 42001 | Provides a structured implementation path. Certification = auditable evidence of governance. Integrates with existing ISO certifications (27001, 9001). | 6-12 months after Act compliance |
| 3 (Recommended) | NIST AI RMF | Deeper risk management methodology. Useful for filling gaps where ISO 42001 is generic. Generative AI Profile directly relevant. | Ongoing reference |

Implementation path

  1. Start with EU AI Act compliance – classify systems, implement transparency, document high-risk systems
  2. Structure the work using ISO 42001 – use AIMS framework to organize policies, processes, and controls
  3. Deepen risk analysis using NIST – apply Govern/Map/Measure/Manage to each AI system, reference GenAI Profile for LLM-specific risks
  4. Seek ISO 42001 certification when governance maturity justifies the investment
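Step 1 (classifying systems) is typically the first thing teams automate when building an AI inventory. The sketch below is deliberately simplified and hypothetical: it only encodes the four risk tiers, and real classification under the Act requires legal review against Annex III and the prohibited practices in Article 5.

```python
# Hypothetical, simplified EU AI Act risk triage for an internal AI inventory.
# Not legal advice; the category sets below are illustrative, not exhaustive.
PROHIBITED = {"social scoring", "subliminal manipulation"}
HIGH_RISK = {"employment", "credit scoring", "critical infrastructure",
             "education", "law enforcement"}  # illustrative Annex III areas

def triage(use_case: str, interacts_with_humans: bool) -> str:
    """Assign one of the Act's four risk tiers to a use case."""
    if use_case in PROHIBITED:
        return "prohibited"
    if use_case in HIGH_RISK:
        return "high-risk"
    if interacts_with_humans:
        return "limited-risk (transparency obligations)"
    return "minimal-risk"

print(triage("employment", True))       # high-risk
print(triage("internal search", True))  # limited-risk (transparency obligations)
```

Even a crude triage like this gives the governance team a prioritized worklist: prohibited uses are stopped, high-risk systems enter the full documentation and conformity track, and the rest get transparency measures or no action.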

This post is licensed under CC BY 4.0 by the author.