Enterprise AI Governance Playbook

This is the practical “how to implement governance” for an enterprise. It connects the legal requirements (EU AI Act), classification framework (AI Risk Classification), documentation standards (AI Documentation), and compliance frameworks (ISO 42001 and NIST AI RMF) into an operational governance process.


Step 1: AI Inventory

You cannot govern what you don’t know exists. The AI inventory is the foundation of all governance.

What to Catalog

For every AI system (including third-party AI services, internal tools, and embedded AI features):

| Field | Example |
|-------|---------|
| System name | Customer Support AI Agent v2 |
| Owner | AI Platform Team / [Name] |
| Description | LLM-based conversational agent for customer support via web chat |
| AI type | LLM (Claude Sonnet 4 via Vertex AI) + RAG + tool use |
| Foundation model provider | Anthropic (via Google Vertex AI) |
| Deployment status | Production / Staging / Development / Planned |
| Users | External customers (Germany, Austria) |
| Data inputs | Customer queries, product catalog, order database |
| Data outputs | Text responses, order status, return initiation |
| Risk classification | Limited risk (Article 50) |
| Personal data processed | Yes – customer names, order IDs, interaction logs |
| Human oversight | Human handoff available; escalation to support team |
| Documentation | Model card: [link], Impact assessment: [link] |
| Last reviewed | 2026-03-15 |

Inventory Template

| System | Owner | Type | Status | Risk Tier | Model Provider | Personal Data | Last Reviewed |
|--------|-------|------|--------|-----------|---------------|---------------|---------------|
| Customer Support Agent | AI Platform | LLM + RAG + tools | Prod | Limited | Anthropic/Vertex | Yes | 2026-03-15 |
| Product Q&A | AI Platform | LLM + RAG | Prod | Limited | Anthropic/Vertex | No | 2026-03-15 |
| Internal IT Helpdesk | IT Ops | LLM + tools | Staging | Limited | OpenAI/Azure | Yes | 2026-02-01 |
| Inventory Forecasting | Supply Chain | Traditional ML | Prod | Minimal | Internal | No | 2026-01-10 |

Cadence

  • New systems: Added to inventory before deployment (part of Gate 1)
  • Existing systems: Reviewed quarterly
  • Full audit: Annually
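The quarterly review cadence can be enforced mechanically rather than by memory. A sketch, assuming each inventory entry carries a `last_reviewed` date (the 92-day threshold is an assumption standing in for "one quarter"):

```python
from datetime import date, timedelta

REVIEW_INTERVAL = timedelta(days=92)  # roughly one quarter


def overdue_reviews(inventory: dict[str, date], today: date) -> list[str]:
    """Return system names whose last review is older than one quarter."""
    return sorted(
        name
        for name, last_reviewed in inventory.items()
        if today - last_reviewed > REVIEW_INTERVAL
    )
```

Running this against the sample inventory on 2026-06-01 would flag Inventory Forecasting (last reviewed 2026-01-10) but not the systems reviewed in March.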

Step 2: Governance Board

Composition

| Role | Responsibility | Why This Role |
|------|----------------|---------------|
| Engineering Lead (AI/Platform) | Technical risk assessment, architecture review | Understands what the AI does and how it can fail |
| Legal / Compliance | Regulatory compliance, contractual obligations | Ensures EU AI Act, GDPR, sector-specific rules are met |
| Data Privacy Officer (DPO) | Personal data impact, DPIA | GDPR mandate; AI governance intersects heavily with data protection |
| Business Stakeholder | Use case justification, user impact | Ensures AI serves genuine business needs and user value |
| Security | Threat assessment, prompt injection, data leakage | AI introduces new attack surfaces beyond traditional security |
| Product / UX (for consumer-facing) | User experience, transparency design | Transparency implementation affects product design |

Responsibilities

  • Approve or reject new AI deployments via the review process (Step 3)
  • Review and update AI acceptable use policy annually
  • Escalation point for AI incidents and compliance concerns
  • Commission audits of high-risk AI systems
  • Report to executive leadership on AI governance posture

Cadence

  • Monthly: Review new AI systems submitted for approval, review monitoring reports
  • Quarterly: Review AI inventory, update risk classifications, assess emerging regulations
  • Ad-hoc: High-risk decisions, incidents, regulatory changes

Step 3: AI Review Process (Deployment Gates)

Every AI system must pass through a gate-based review before production deployment. The depth of review scales with risk tier.

```text
┌──────────────────────────────────────────────────────────────────┐
│ Gate 1: Risk Classification                                      │
│  - Classify system per AI Risk Classification                    │
│  - Add to AI inventory                                           │
│  - If MINIMAL → proceed with lightweight review (skip to Gate 5) │
│  - If LIMITED or HIGH → continue to Gate 2                       │
└──────────┬───────────────────────────────────────────────────────┘
           │
┌──────────┴───────────────────────────────────────────────────────┐
│ Gate 2: Documentation                                            │
│  - Model card completed                                          │
│  - Dataset datasheets (if custom training/fine-tuning)           │
│  - Impact assessment (mandatory for HIGH, recommended for        │
│    LIMITED)                                                      │
│  - System card (if multi-component agent system)                 │
└──────────┬───────────────────────────────────────────────────────┘
           │
┌──────────┴───────────────────────────────────────────────────────┐
│ Gate 3: Evaluation & Testing                                     │
│  - Eval suite passes baseline                                    │
│  - Guardrails configured and tested                              │
│  - Red-teaming for HIGH risk systems                             │
│  - Bias/fairness testing for systems affecting individuals       │
└──────────┬───────────────────────────────────────────────────────┘
           │
┌──────────┴───────────────────────────────────────────────────────┐
│ Gate 4: Compliance Check                                         │
│  - Transparency requirements met                                 │
│  - Human oversight mechanisms in place                           │
│  - DPIA completed (if personal data processed)                   │
│  - Logging and monitoring configured                             │
│  - Acceptable use policy alignment verified                      │
└──────────┬───────────────────────────────────────────────────────┘
           │
┌──────────┴───────────────────────────────────────────────────────┐
│ Gate 5: Approval                                                 │
│  - MINIMAL: Engineering lead sign-off                            │
│  - LIMITED: Engineering lead + DPO sign-off                      │
│  - HIGH: Full governance board approval                          │
│  - Conformity assessment completed (HIGH only)                   │
│  - EU database registration (HIGH only)                          │
└──────────────────────────────────────────────────────────────────┘
```

Gate Depth by Risk Tier

| Gate | Minimal Risk | Limited Risk | High Risk |
|------|--------------|--------------|-----------|
| 1. Classification | Required | Required | Required |
| 2. Documentation | Optional model card | Model card required | Full documentation (Annex IV) |
| 3. Eval & Testing | Basic testing | Eval suite + guardrails | Eval suite + guardrails + red-team + bias testing |
| 4. Compliance | Inventory only | Transparency + monitoring | Full compliance check + DPIA + conformity assessment |
| 5. Approval | Eng lead | Eng lead + DPO | Full governance board |
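The gate-depth matrix can be encoded as a lookup table so review tooling derives the checklist directly from the risk tier instead of relying on reviewers to remember it. A sketch — the requirement strings follow the table above, but the data structure itself is an assumption:

```python
# Gate requirements per risk tier, following the gate-depth matrix.
GATE_REQUIREMENTS = {
    "minimal": {
        "documentation": ["optional model card"],
        "testing": ["basic testing"],
        "compliance": ["inventory entry"],
        "approvers": ["engineering lead"],
    },
    "limited": {
        "documentation": ["model card"],
        "testing": ["eval suite", "guardrails"],
        "compliance": ["transparency", "monitoring"],
        "approvers": ["engineering lead", "dpo"],
    },
    "high": {
        "documentation": ["model card", "annex iv technical documentation"],
        "testing": ["eval suite", "guardrails", "red-team", "bias testing"],
        "compliance": ["full compliance check", "dpia", "conformity assessment"],
        "approvers": ["governance board"],
    },
}


def review_checklist(risk_tier: str) -> dict[str, list[str]]:
    """Return the gate checklist for a risk tier; fail loudly on unknown tiers."""
    try:
        return GATE_REQUIREMENTS[risk_tier.lower()]
    except KeyError:
        raise ValueError(f"Unknown risk tier: {risk_tier!r}") from None
```

Failing loudly on an unrecognized tier is deliberate: a system that cannot be classified should never silently receive the lightest review path.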

Step 4: AI Acceptable Use Policy

An internal policy defining how AI may and may not be used across the organization.

Template Sections

1. Scope Applies to all employees, contractors, and third parties using AI systems on behalf of the organization. Covers both AI systems the organization builds and third-party AI services used by employees.

2. Permitted Uses

  • AI-assisted customer support (with disclosure)
  • AI-generated content for product descriptions (with review)
  • Internal productivity tools (code assistance, summarization, translation)
  • Data analysis and reporting (non-decision-making)

3. Prohibited Uses (aligned with EU AI Act Article 5)

  • Social scoring of employees or customers
  • Emotion recognition in workplace settings (unless safety-critical)
  • Manipulative AI targeting customer vulnerabilities
  • Autonomous decisions with significant individual impact without human review
  • Processing special category data (health, biometric) without explicit legal basis

4. Conditions for Use

  • All customer-facing AI must include AI disclosure
  • No personal data entered into AI systems without DPO-approved data processing agreement
  • Third-party AI services must be vetted by security and legal
  • AI-generated outputs used in customer communications must be reviewed by a human before first use of each template/pattern
  • No confidential business data in consumer AI tools (ChatGPT, Claude.ai) without approved enterprise license

5. AI Literacy Requirements (EU AI Act, effective Feb 2025)

  • All employees using AI must complete AI literacy training
  • Training covers: how AI works (basic), limitations, risks, responsible use, reporting concerns
  • Refresher: annually

6. Incident Reporting

  • Report AI safety concerns, unexpected behavior, or compliance violations to the governance board
  • Escalation path: team lead -> AI platform team -> governance board -> legal
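The conditions in section 4 lend themselves to a pre-deployment checklist that returns concrete reasons for rejection. A minimal sketch, assuming yes/no answers are collected per system — the condition keys and descriptions are illustrative, not policy language:

```python
# Illustrative mapping of policy conditions to human-readable descriptions.
POLICY_CONDITIONS = {
    "customer_facing_disclosure": "Customer-facing AI includes an AI disclosure",
    "dpa_in_place": "DPO-approved data processing agreement covers personal data",
    "third_party_vetted": "Third-party AI service vetted by security and legal",
    "human_review_of_templates": "Human reviewed outputs before first customer use",
}


def policy_violations(answers: dict[str, bool]) -> list[str]:
    """Return descriptions of conditions that are unmet or unanswered."""
    return [
        description
        for key, description in POLICY_CONDITIONS.items()
        if not answers.get(key, False)
    ]
```

Treating an unanswered condition as a violation (the `answers.get(key, False)` default) keeps the check conservative: silence never counts as compliance.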

Step 5: Incident Response for AI

AI incidents differ from traditional software incidents.

Key AI-specific incident types:

  • Quality degradation – model outputs become less accurate, more hallucinations
  • Cost anomaly – token usage spikes, budget breached
  • Prompt injection – adversarial inputs manipulate AI behavior
  • Data leak – AI outputs contain PII, internal data, or system prompt
  • Compliance violation – missing disclosure, unauthorized decision-making
  • Bias incident – AI treats a demographic group unfairly

Each requires a specific response playbook.
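Routing each incident type to its playbook and on-call roles can be a simple dispatch table; a sketch in Python, where the playbook names and role lists are placeholders rather than prescribed procedures:

```python
from enum import Enum


class AIIncidentType(Enum):
    QUALITY_DEGRADATION = "quality_degradation"
    COST_ANOMALY = "cost_anomaly"
    PROMPT_INJECTION = "prompt_injection"
    DATA_LEAK = "data_leak"
    COMPLIANCE_VIOLATION = "compliance_violation"
    BIAS_INCIDENT = "bias_incident"


# Which playbook to open and which roles to notify first (illustrative).
INCIDENT_ROUTING = {
    AIIncidentType.QUALITY_DEGRADATION: ("quality-playbook", ["ai_platform"]),
    AIIncidentType.COST_ANOMALY: ("cost-playbook", ["ai_platform"]),
    AIIncidentType.PROMPT_INJECTION: ("security-playbook", ["security", "ai_platform"]),
    AIIncidentType.DATA_LEAK: ("security-playbook", ["security", "dpo", "legal"]),
    AIIncidentType.COMPLIANCE_VIOLATION: ("compliance-playbook", ["legal", "governance_board"]),
    AIIncidentType.BIAS_INCIDENT: ("bias-playbook", ["governance_board", "legal"]),
}


def route_incident(incident_type: AIIncidentType) -> tuple[str, list[str]]:
    """Look up the playbook and the roles to notify for an incident type."""
    return INCIDENT_ROUTING[incident_type]
```

Using an enum rather than free-text strings means an unclassified incident is a type error at triage time, not a silently unrouted ticket.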


Step 6: Compliance Monitoring

Governance is not a one-time exercise. Ongoing monitoring ensures continued compliance.

| Activity | Frequency | Owner |
|----------|-----------|-------|
| AI inventory review | Quarterly | AI Platform Team |
| Risk classification reassessment | Quarterly (or on change) | Governance Board |
| Model card updates | On model/prompt change + annually | System owner |
| Transparency audit | Quarterly | Compliance + QA |
| Eval suite execution | Weekly (automated) + monthly (manual) | AI Platform Team |
| Production monitoring review | Continuous (dashboards) + monthly (report) | AI Platform Team |
| Governance board meeting | Monthly | Governance Board |
| Full compliance audit | Annually | Legal + External auditor |
| EU AI Act regulatory review | Quarterly (track amendments/guidance) | Legal |
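This schedule can drive automated reminders instead of living only in the table. A sketch that maps each frequency keyword to a due interval — the interval lengths are illustrative approximations:

```python
from datetime import date, timedelta

# Approximate intervals for the recurring frequencies in the schedule.
FREQUENCY_INTERVALS = {
    "weekly": timedelta(weeks=1),
    "monthly": timedelta(days=31),
    "quarterly": timedelta(days=92),
    "annually": timedelta(days=366),
}


def next_due(last_done: date, frequency: str) -> date:
    """Compute when a recurring governance activity is next due."""
    try:
        return last_done + FREQUENCY_INTERVALS[frequency]
    except KeyError:
        raise ValueError(f"Unknown frequency: {frequency!r}") from None
```

For example, a quarterly inventory review completed on 2026-03-15 would come due again in mid-June 2026.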

Audit Readiness

Maintain an evidence repository containing:

  • AI inventory (current)
  • Model cards and impact assessments (all versions)
  • Eval results and trend data
  • Guardrail trigger logs
  • Transparency implementation evidence (screenshots, test results)
  • Governance board meeting minutes and decisions
  • Training records (AI literacy)
  • Incident reports and post-mortems

This evidence supports both internal audits and regulatory inquiries.


This post is licensed under CC BY 4.0 by the author.