KnowMBAAdvisory

AI Strategy · Intermediate · 7 min read

AI Compliance Mapping

AI compliance mapping is the inventory that connects your AI use cases to the specific regulatory obligations that apply to each. It is a matrix: rows are use cases (e.g., resume-screening AI in EU, customer support chatbot in California), columns are regimes (EU AI Act, GDPR, CCPA, NYC Local Law 144, sector rules like HIPAA or FCRA), and cells contain the obligation and current compliance status. Without this map, every regulatory change triggers a panicked all-hands audit; with it, you know in 30 minutes which features are affected.

Also known as: AI Regulatory Mapping, AI Compliance Matrix, AI Obligation Register, Regulatory Crosswalk, AI Compliance Inventory

The Trap

The trap is treating compliance mapping as a one-time consulting deliverable: a 200-page PDF produced by an outside firm and immediately stale. Six months later, no one knows which features have changed, which regulations have updated, or which new use cases have launched. The map only earns its keep if it lives inside the AI use case approval workflow, gets updated every time a use case ships or a regulation changes, and is owned by a named person, not 'compliance' as an abstract function.

What to Do

Build a living matrix in your governance tool of choice (a spreadsheet works; a GRC platform is better at scale). For each AI use case, record: (1) jurisdictions of users affected, (2) data categories processed, (3) decision impact (informational vs consequential), (4) applicable regimes, (5) specific obligations, and (6) compliance status (compliant / gap / not-yet-applicable). Tie updates to the use case approval workflow: a use case cannot launch without a row in the matrix. Refresh quarterly against a regulatory tracker.
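The matrix row and the launch gate described above can be sketched as a simple record plus a check in the approval workflow. This is a minimal illustration, not any particular GRC platform's schema; the field names and the `can_launch` helper are hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class UseCaseMapping:
    """One row of the compliance matrix (illustrative field names)."""
    name: str                      # e.g. "resume-screening AI"
    user_jurisdictions: list[str]  # where affected users are, not company HQ
    data_categories: list[str]     # e.g. ["CV text", "employment history"]
    decision_impact: str           # "informational" or "consequential"
    regimes: list[str]             # e.g. ["EU AI Act", "GDPR", "NYC LL 144"]
    obligations: dict[str, str] = field(default_factory=dict)  # regime -> obligation
    status: dict[str, str] = field(default_factory=dict)       # regime -> compliant / gap / not-yet-applicable

def can_launch(matrix: dict[str, UseCaseMapping], use_case: str) -> bool:
    """Approval-workflow gate: no matrix row, or any open gap, blocks launch."""
    row = matrix.get(use_case)
    return row is not None and all(s != "gap" for s in row.status.values())
```

The point of encoding the gate is that the matrix stays current as a side effect of shipping, rather than depending on anyone remembering to update a document.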

Formula

Compliance Coverage = (Use Cases with Complete Mapping) ÷ (Total AI Use Cases in Production). Target: 100%
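The coverage ratio is a straight division; a minimal sketch (the function name and the rule for an empty denominator are illustrative assumptions):

```python
def compliance_coverage(mapped: int, total_in_production: int) -> float:
    """Share of production AI use cases with a complete matrix row (0.0-1.0).

    What counts as a "complete" mapping is defined by your own process.
    """
    if total_in_production == 0:
        return 1.0  # nothing in production is vacuously fully covered
    return mapped / total_in_production

# e.g. 14 of 20 production use cases fully mapped -> 0.7, below the 100% target
```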

In Practice

The EU AI Act classifies AI systems into risk tiers (prohibited, high-risk, limited-risk, minimal-risk) with specific obligations per tier. The NIST AI Risk Management Framework provides a US-side voluntary mapping. NYC Local Law 144 requires bias audits for automated employment decision tools. Companies operating across these regimes maintain compliance maps that show, per use case, which obligations apply โ€” and the gaps between them. Salesforce, Microsoft, and IBM each publish how they map their AI offerings against these frameworks.

Pro Tips

1. The most valuable column in your matrix is 'jurisdiction of affected users,' not 'jurisdiction of company HQ.' Most AI regulations apply where the user is, not where the company is. A US-headquartered company serving EU users is bound by the EU AI Act regardless of where the engineering team sits.

2. Track regulations in three states: in-force, adopted-not-yet-in-force, and proposed. The EU AI Act took years from proposal to enforcement; teams that started mapping at the proposal stage had years of lead time. Teams that waited for enforcement had months.

3. Use cases that span multiple jurisdictions should be designed to the strictest applicable standard, then loosened by region only when clearly safe. The reverse (designing to the loosest standard and patching for stricter ones) creates a permanent backlog of regional exceptions and bugs.
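The strictest-standard rule in the tips above can be sketched with a toy ordering over the EU AI Act's risk tiers. This ranking is purely illustrative: real obligations are qualitative and need legal review, not a numeric score.

```python
# Hypothetical strictness ordering over EU AI Act-style risk tiers.
STRICTNESS = {"minimal": 0, "limited": 1, "high": 2, "prohibited": 3}

def design_tier(per_regime_tiers: dict[str, str]) -> str:
    """Pick the strictest risk tier across every jurisdiction a use case spans.

    Engineer to this tier everywhere first; loosen by region only when
    clearly safe.
    """
    return max(per_regime_tiers.values(), key=lambda tier: STRICTNESS[tier])

# A use case rated "limited" in one regime but "high" in another
# should be built to the "high" obligations from the start.
```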

Myth vs Reality

Myth

"We're a US company so we only need to worry about US regulations"

Reality

If any of your users are in the EU, UK, Brazil, China, or California, you are subject to those jurisdictions' AI and data protection rules. Most modern AI regulations are extraterritorial. The 'we're US-only' belief is the single most expensive misreading of regulatory exposure I see in mid-market firms.

Myth

"AI regulation is too unsettled to map now, so we should wait"

Reality

By the time the regulation is settled, you have 6-18 months to comply. Building the map is the slow part: EU AI Act mapping takes a quarter to do well. Doing it under enforcement pressure costs 3-5x more, and you'll miss obligations. Start now with the regulations that already exist and update as new ones land.


Knowledge Check

A US-headquartered SaaS company offers an AI resume-screening feature available globally to enterprise customers. Which compliance regimes most likely apply to this single feature?

Industry benchmarks

Is your number good?

Calibrate against real-world tiers. Use these ranges as targets, not absolutes.

AI Compliance Mapping Maturity
(Enterprises with AI features in production across multiple jurisdictions)

Mature: 100% of production use cases mapped, quarterly refresh, integrated with launch process
Functional: 70-99% mapped, ad hoc refresh
Partial: 40-69% mapped, no defined refresh cadence
Weak: <40% mapped, mapping is reactive to incidents

Source: EU AI Act + NIST AI RMF + IAPP AI Governance benchmarks

Real-world cases

Companies that lived this.

Verified narratives with the numbers that prove (or break) the concept.


EU AI Act

2024 (adopted) to 2026 (full enforcement)

Outcome: mixed

The EU AI Act introduced the first comprehensive horizontal AI regulation, classifying AI systems into prohibited, high-risk, limited-risk, and minimal-risk categories. High-risk systems (employment, credit, education, law enforcement, biometrics, critical infrastructure) face conformity assessment, technical documentation, post-market monitoring, and human oversight requirements. Penalties reach up to 7% of global annual turnover or €35M, whichever is higher. Companies serving any EU users must map.

Maximum Fine: 7% of global turnover or €35M
Risk Tiers: 4 (prohibited, high, limited, minimal)
Extraterritorial Reach: Yes, applies to non-EU providers serving EU users

The mapping exercise is the defense. Without a map showing which use cases fall in which tier and what mitigations are in place, every regulator inquiry becomes a months-long discovery exercise.


NIST AI Risk Management Framework

2023-present

Outcome: success

The NIST AI RMF is a voluntary US framework structured around four functions: Govern, Map, Measure, Manage. The 'Map' function specifically calls for inventorying AI systems and contexts of use. While voluntary, NIST AI RMF has become the de facto US baseline: federal procurement and many enterprise vendor questionnaires now reference it. Companies that adopted NIST AI RMF mapping had a head start on EU AI Act mapping because the structures are largely compatible.

Framework Functions: Govern, Map, Measure, Manage
Status: Voluntary but de facto standard
Compatibility: Crosswalks to EU AI Act, ISO 42001

Adopt a structured framework even when not legally required: it dramatically lowers the cost of complying with the next regulation that lands.


Hypothetical: HR Tech Vendor & NYC Local Law 144

Composite scenario

Outcome: failure

A 200-employee HR tech vendor sold AI-powered hiring tools to NYC employers but had not mapped against NYC Local Law 144 (effective July 2023, requiring bias audits and candidate notice for Automated Employment Decision Tools). Three customers received complaints; one became a class action. The vendor had to retrofit bias audits, candidate disclosures, and audit publication for 40+ customers under enforcement pressure. Legal and engineering cost: $1.8M. Customer churn from the incident: ~15% of NYC book of business. A pre-launch compliance map would have flagged the law 18 months before enforcement.

Pre-Launch Mapping: None for NYC LL 144
Retrofit Cost: ~$1.8M
Customer Churn: ~15% of NYC book

Mapping is not paperwork; it is the early warning system that prevents the cost of retrofitting under enforcement pressure. The cost of mapping proactively is 10-20% of the cost of mapping reactively.


Beyond the concept

Turn AI Compliance Mapping into a live operating decision.

Use this concept as the framing layer, then move into a diagnostic if it maps directly to a current bottleneck.

Typical response time: 24h · No retainer required
