AI Strategy · Beginner · 6 min read

AI Acceptable Use Policy

An AI Acceptable Use Policy is the short, plain-English document that tells employees what they can and cannot do with AI tools at work. The effective version is one page, written by a real human, and answers four questions: (1) What AI tools are approved? (2) What data can you put into them? (3) What outputs are you accountable for? (4) Where do you escalate when in doubt? The dysfunctional version is a 30-page legal document no employee reads, signed once at onboarding and never referenced. AUPs are operational documents, not compliance artifacts.

Also known as: AI AUP, AI Usage Policy, Generative AI Policy, Employee AI Policy, Responsible AI Use Policy

The Trap

The trap is writing the policy as a 'we prohibit everything' liability shield. 'Employees may not use any generative AI without written CISO approval.' This produces shadow AI — employees use ChatGPT on their phones because the official path is blocked. Surveys consistently show 60-75% of employees use AI tools at work; if your policy bans them, they're using them anyway, just without your visibility or controls. The opposite trap: 'use AI however you like.' This produces leaks of confidential data into vendor training pipelines and reputational incidents.

What to Do

Write a one-page policy in plain language with four sections: (1) Approved Tools — list the specific products and tiers (e.g., 'enterprise ChatGPT via SSO is approved; consumer ChatGPT.com is not'). (2) Data Tiers — what categories of data can go into which tools (public OK everywhere; internal OK in approved enterprise tools; confidential and restricted go nowhere external). (3) Accountability — 'you are responsible for AI outputs you publish, send, or commit.' (4) Escalation — named contact and channel for questions. Refresh quarterly. Pair with technical controls (DLP, browser gateways, SSO-only enterprise tiers) so the policy is enforceable.
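A data tier table crisp enough for employees is also crisp enough to encode into tooling (DLP rules, a browser gateway allowlist). A minimal sketch in Python, assuming illustrative tier names and tool identifiers that mirror the examples above; none of this is a standard:

    # Which data tiers may enter which AI tools. Tier and tool names are
    # illustrative; "public OK everywhere" mirrors the example policy above.
    DATA_TIERS = {
        "public":       {"enterprise_chatgpt", "copilot", "consumer_chatgpt"},
        "internal":     {"enterprise_chatgpt", "copilot"},
        "confidential": set(),   # no external AI tools
        "restricted":   set(),   # no external AI tools
    }

    def is_allowed(data_tier: str, tool: str) -> bool:
        """True if the policy permits this data tier in this tool."""
        return tool in DATA_TIERS.get(data_tier, set())

    assert is_allowed("internal", "copilot")
    assert not is_allowed("confidential", "enterprise_chatgpt")

The point is not the code; it is that a tier table unambiguous enough to encode is unambiguous enough for employees to apply without calling legal.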

Formula

AUP Effectiveness = (Clarity × Reach × Enforceability) − Shadow AI Workarounds Created
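The terms are qualitative, so any scoring is a convention. A hedged sketch, assuming each factor is scored 0 to 1 and shadow-AI workarounds are expressed as a penalty on the same scale (the formula itself specifies no units):

    # Hedged scoring sketch; the 0-1 scales are an assumed convention.
    def aup_effectiveness(clarity: float, reach: float,
                          enforceability: float,
                          workaround_penalty: float) -> float:
        return clarity * reach * enforceability - workaround_penalty

    # A crisp, well-distributed policy nobody can enforce still scores negative:
    print(aup_effectiveness(0.9, 0.8, 0.2, 0.3))   # 0.144 - 0.3 = -0.156
    # The same policy paired with DLP and SSO-only enterprise access:
    print(aup_effectiveness(0.9, 0.8, 0.8, 0.05))  # 0.576 - 0.05 = 0.526

Note that the terms multiply: any single factor at zero zeroes the whole product, which is the formula's point. Clarity without enforceability is worth nothing.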

In Practice

Salesforce's AI Acceptable Use Policy, OpenAI's Usage Policies, Anthropic's Usage Policy, and Google's prohibited use policies all illustrate the genre: short, scoped to specific harms, and operationally actionable. On the internal side, enterprises such as Microsoft, JPMorgan, and Samsung have all updated their employee AUPs after public incidents (Samsung's 2023 ChatGPT leak being the canonical example). The pattern across mature policies: short, specific, paired with enforcement, refreshed regularly, and owned by a named team.

Pro Tips

01. Approve at least one official enterprise tool. A policy that bans everything fails operationally. Employees need a path to use AI; if you don't provide it, they will create one. Approving one well-chosen enterprise tool reduces shadow AI by 70%+.

02. Write the data tier table as the centerpiece of the policy. The single most-asked employee question is 'can I put this in ChatGPT?' A clear tier table answers it without needing legal interpretation. Employees who know the answer don't ping legal; legal stays focused on edge cases.

03. Pair the AUP with a 30-minute mandatory training that includes 5 'real scenarios' employees might face. Policy plus scenario training raises compliance dramatically versus policy alone. Scenarios are sticky in a way prose is not.

Myth vs Reality

Myth: We don't need an AI AUP if we have an existing data security policy.

Reality: Existing policies don't address AI-specific issues: prompt-injection risk, model output accountability, data-in-prompts, and vendor training-on-data risk. Generic data policies leave employees guessing on AI questions, which produces shadow usage. An AI-specific policy is needed.

Myth: Strict AUPs prevent leaks.

Reality: Strict-without-alternative AUPs cause leaks by pushing employees to unmonitored consumer tools. Samsung's 2023 incident occurred after the company had restrictions in place; engineers used ChatGPT.com on personal accounts. Permissive policies with approved tools and DLP outperform strict policies with no path.


Knowledge Check

A 5,000-employee company has banned all generative AI tools after a near-incident. After 6 months, what is the most likely actual state?

Industry benchmarks

Calibrate against real-world tiers. Use these ranges as targets, not absolutes.

AI Acceptable Use Policy Maturity (mid-to-large enterprises with knowledge-worker populations)

  • Mature: 1-page policy + approved tools + data tier table + DLP enforcement + quarterly refresh + scenario training
  • Functional: policy exists with approved tools, but enforcement is uneven
  • Permissive: generic policy, no specific approved tools or data tiers
  • Prohibitionist (Shadow AI Engine): total ban, widespread shadow usage

Source: ISACA AI policy patterns + Samsung incident lessons + Salesforce AUP template

Real-world cases

Companies that lived this.

Verified narratives with the numbers that prove (or break) the concept.


Salesforce AI Acceptable Use Policy (2023-present) · Outcome: success

Salesforce published a public AI Acceptable Use Policy that prohibits specific high-harm uses (weapons, disinformation, child sexual abuse material, election interference, certain biometric uses) while permitting normal commercial use. The policy is short, scoped to actual harms, and explicitly enforceable through contract terms and technical controls. Salesforce's policy has become a widely referenced model for vendor-side AUPs.

  • Length: short, scoped, plain language
  • Prohibition specificity: named categories of harm
  • Enforcement: contract terms + technical controls

A useful AUP names specific prohibitions and permits everything else. Vague catch-all prohibitions produce ambiguity and shadow workarounds; specific named prohibitions produce clear enforcement.


Samsung ChatGPT Leak (2023) · Outcome: failure

Samsung engineers reportedly pasted confidential semiconductor source code and meeting transcripts into ChatGPT to debug and summarize. The data was potentially retained by OpenAI under terms applicable to consumer ChatGPT at the time. Samsung subsequently restricted generative AI usage for a period and accelerated its own internal AI tooling. The incident became canonical in AI governance discussions because it crystallized the real risk: not malicious leaks, but well-meaning employees pasting confidential data into convenient consumer tools.

  • Data exposed: source code + meeting transcripts
  • Cause: consumer tool use without policy or controls
  • Response: restriction + internal AI tooling investment

The AI leak risk is not bad actors — it is good employees with no approved alternative. AUPs paired with approved enterprise tooling prevent the Samsung scenario; bans alone do not.


Decision scenario

Drafting the Company AUP

You are CIO of a 4,000-employee professional services firm. A near-miss occurred last week: a partner pasted client-privileged material into consumer ChatGPT to draft a summary. Legal and the CEO want a policy operational within 30 days. The CFO wants minimal new tooling spend. The CHRO is concerned about employee productivity loss.

  • Headcount: 4,000
  • Estimated current AI usage: ~65% weekly
  • Approved enterprise tools: none
  • Recent near-miss incidents: 1 (this week)
  • CEO mandate: policy in 30 days

Decision 1: policy stance. Three drafts are on the table: a total ban (CISO recommendation), a permissive policy with approved enterprise tools (your recommendation), or no formal policy with case-by-case guidance (CFO preference).

Option A: Total ban, to protect the firm from further leaks while a longer-term policy is developed.
Outcome: Within 8 weeks an audit reveals 70% of partners are still using consumer ChatGPT on personal devices to handle work. The ban produces zero behavior change but creates 'plausible deniability' that prevents the firm from deploying any monitoring. A second leak occurs from an unmonitored personal account; this one becomes public. Reputational damage is severe. The ban is reversed under pressure.
Results: Shadow AI rate: 70% → 70% (unchanged) · Visibility: none · Subsequent leaks: +1 (public)

Option B: Permissive policy with one approved enterprise tool (e.g., Microsoft Copilot or enterprise ChatGPT/Claude with no-train-on-data terms), a data tier table, browser DLP, and mandatory 30-minute training.
Outcome: Within 60 days, ~80% of weekly AI usage is on the approved enterprise tool with full visibility, audit logs, and DLP controls. Shadow AI drops to ~15%. No further client-data incidents. A CSAT survey shows 91% of employees prefer the new policy to the prior ambiguity. The CFO approves the tooling spend after seeing the leak-prevention math.
Results: Shadow AI rate: 70% → ~15% · Approved tool adoption: 0 → 80% · Subsequent material leaks: 0

Option C: No formal policy; issue general guidance and rely on the existing data security policy.
Outcome: Six months later you have neither approved tools nor enforcement. Employee surveys show 73% are 'unsure what's allowed.' Legal calls a halt to all AI use after a third near-miss. The lack of policy created the worst of both worlds: no enablement and no protection.
Results: Employee clarity: low · Near-misses: +2 · Eventual outcome: forced ban
Decision 2: enforcement architecture. The policy is set; how do you make it stick?

Option A: Rely on policy text and annual training; trust employees to follow the rules.
Outcome: Adoption of approved tools climbs to ~50% but plateaus; ~30% continue using consumer tools out of habit. One quarter into the policy, a junior associate accidentally exposes a client name in a consumer tool. The incident is small but reveals the gap. Leadership questions the program.
Results: Approved tool adoption: ~50% (plateau) · Shadow AI rate: ~30% persistent

Option B: Pair the policy with technical controls: SSO-only access to the enterprise tool, browser DLP that blocks client-tagged content from being pasted into known consumer AI domains, and a quarterly randomized policy refresher with real scenarios.
Outcome: Technical controls catch what policy text cannot. Approved tool adoption reaches ~85% within 4 months. The DLP blocks ~120 paste attempts per month that would have leaked client data. Employees report the controls are 'invisible until you try to do something risky', which is the right UX. Audit and regulators view the program favorably.
Results: Approved tool adoption: ~85% · DLP blocks per month: ~120 · Audit readiness: strong
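For concreteness, the core of the browser DLP rule described above is a small predicate: block the paste when the destination is a known consumer AI domain and the clipboard carries a client tag. A simplified sketch in Python; the domain list, the tag convention, and the paste-event hook are all assumptions, not any vendor's actual API:

    import re

    # Illustrative consumer AI domains; a real deployment would use the
    # DLP vendor's maintained category feed.
    CONSUMER_AI_DOMAINS = {"chatgpt.com", "chat.openai.com", "gemini.google.com"}

    # Hypothetical client-tag convention, e.g. "[CLIENT:ACME]" stamped by
    # the document management system.
    CLIENT_TAG = re.compile(r"\[CLIENT:[A-Z0-9_-]+\]")

    def should_block_paste(destination_host: str, clipboard_text: str) -> bool:
        """True if this paste should be blocked and logged."""
        return (destination_host in CONSUMER_AI_DOMAINS
                and CLIENT_TAG.search(clipboard_text) is not None)

    # The browser extension or gateway would call this on every paste event:
    assert should_block_paste("chatgpt.com", "[CLIENT:ACME] draft summary")
    assert not should_block_paste("intranet.example.com", "[CLIENT:ACME] notes")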


Beyond the concept

Turn AI Acceptable Use Policy into a live operating decision.

Use this concept as the framing layer, then move into a diagnostic if it maps directly to a current bottleneck.

Typical response time: 24h · No retainer required
