KnowMBA Advisory · AI Strategy · Intermediate · 7 min read

AI Image Generation Policy

An AI image generation policy governs how a company creates, uses, and labels AI-generated images across marketing, product, internal communications, and customer experiences. The policy must answer five questions: (1) which models can be used (commercial license, training data provenance, indemnification); (2) what use cases are permitted (marketing campaigns, product mockups, stock replacement, customer-facing visuals); (3) what use cases are prohibited (real people without consent, sensitive demographics, factual events, deceptive imagery); (4) what disclosure or watermarking is required (C2PA content credentials, visible labels); and (5) who reviews and approves before publishing. The policy is increasingly a legal requirement: the EU AI Act mandates disclosure of AI-generated synthetic media, and copyright lawsuits (Getty v Stability AI, NYT v OpenAI) are reshaping the indemnification landscape.

Also known as: Generative Image Policy, Visual AI Use Policy, Brand Image Generation Rules, Synthetic Media Policy, Image Provenance Policy

The Trap

The trap is letting marketing or social teams use whatever image generator is convenient (free tier of Stable Diffusion, Midjourney without commercial license, ChatGPT image gen) without provenance or rights review. The KnowMBA POV: image generation without provenance becomes brand liability. When Getty Images sued Stability AI for training on copyrighted images, every business using Stable Diffusion outputs commercially became a downstream risk. The same applies to Midjourney's training data lawsuits and the ongoing class actions. A policy that says 'use anything that looks good' becomes 'we ship a brand campaign and discover the model regenerated a competitor's copyrighted character' — pull the campaign, take the press hit, eat the legal cost.

What to Do

Ship a one-page policy with five rules. (1) Approved tools list: Adobe Firefly (trained on licensed content, commercial license, indemnification), OpenAI DALL-E (commercial use permitted, with content policy), Google Imagen (enterprise license terms), Microsoft Designer. Block consumer Midjourney and consumer Stable Diffusion for commercial output unless reviewed. (2) Always embed C2PA Content Credentials when the tool supports them (Firefly does natively). (3) Prohibit generation of real, identifiable people without explicit consent and recognizable IP without rights review. (4) Require visible disclosure label on customer-facing AI imagery (e.g., 'AI-generated') in markets where regulation applies (EU). (5) Pre-publish review by brand + legal for any external campaign. Audit usage quarterly.
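Rule (1) is the easiest to enforce mechanically. A minimal sketch of an approved-tools gate, assuming a simple request-time check before any commercial generation; the tool identifiers and attribute fields are illustrative, not vendor-confirmed metadata:

```python
# Hypothetical allowlist gate for rule (1). Tool names and attributes are
# illustrative placeholders, not authoritative vendor terms.
APPROVED_TOOLS = {
    "adobe-firefly":      {"indemnified": True, "c2pa_native": True},
    "openai-dalle":       {"indemnified": True, "c2pa_native": False},
    "google-imagen":      {"indemnified": True, "c2pa_native": False},
    "microsoft-designer": {"indemnified": True, "c2pa_native": False},
}

# Consumer tiers blocked for commercial output unless explicitly reviewed.
BLOCKED_FOR_COMMERCIAL = {"midjourney-consumer", "stable-diffusion-consumer"}

def check_tool(tool: str, commercial: bool) -> str:
    """Return 'allowed', 'blocked', or 'needs_review' for a generation request."""
    if tool in APPROVED_TOOLS:
        return "allowed"
    if tool in BLOCKED_FOR_COMMERCIAL and commercial:
        return "blocked"        # policy: no commercial use without review
    return "needs_review"       # unknown or non-commercial: route to brand + legal
```

The default-deny shape matters: anything not on the list routes to review rather than silently passing.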

Formula

Image Risk Score = (Use Case Sensitivity × Model Provenance Risk) − (Disclosure + Approval + Indemnification Coverage)
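The formula can be made concrete with a scoring sketch. The 0-10 scales below are an assumption for illustration; the source does not specify units:

```python
def image_risk_score(use_case_sensitivity: float,
                     model_provenance_risk: float,
                     disclosure: float,
                     approval: float,
                     indemnification: float) -> float:
    """Image Risk Score = (Use Case Sensitivity x Model Provenance Risk)
    - (Disclosure + Approval + Indemnification Coverage).
    All inputs assumed on a 0-10 scale (scale is illustrative)."""
    return (use_case_sensitivity * model_provenance_risk) - (
        disclosure + approval + indemnification)

# A sensitive campaign (8) on a scraped-data model (9) with no mitigations
# scores 72; the same campaign on an indemnified, disclosed, reviewed tool
# (provenance risk 2, mitigations 8 each) scores -8.
```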

In Practice

Adobe launched Firefly in 2023 specifically positioned as commercially safe — trained on Adobe Stock + public domain + licensed content, with indemnification for enterprise customers and native C2PA Content Credentials embedded in every output. By 2025, Firefly had been used to generate over 20 billion images, becoming the default for risk-averse enterprise marketing teams. Getty Images filed multi-jurisdictional lawsuits against Stability AI in 2023 alleging Stable Diffusion was trained on millions of Getty's copyrighted images without license — the case became a reference point for downstream risk. The pattern: enterprises that adopted a commercially-safe tool with provenance rolled out broadly; those that used scraped-data models faced rollback when legal got involved.

Pro Tips

  • 01

    C2PA Content Credentials (the open standard backed by Adobe, Microsoft, BBC, Sony, Nikon) embed cryptographic provenance in image files showing how they were created and edited. Firefly outputs them natively. Building C2PA into your publishing workflow now is much cheaper than retrofitting later when EU AI Act enforcement tightens.

  • 02

    Indemnification is the line that separates enterprise from consumer image generators. Adobe (Firefly), Microsoft (Designer/Copilot), Google (Imagen on Vertex AI), OpenAI (Enterprise) all offer some indemnification for IP claims arising from their generated outputs. Consumer tools generally do not. Your legal team should require indemnification for any tool used commercially.

  • 03

    Watermarking is necessary but not sufficient. Visible labels on customer-facing imagery, internal logging of which model and prompt produced each asset, and the ability to revoke and replace a generated image after the fact are all part of a credible policy. Policy without enforcement is the same as no policy.
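The internal logging and revocation pieces of tip 03 can be sketched as a small asset registry. Class and field names are hypothetical; this is one possible shape, not a prescribed system:

```python
from dataclasses import dataclass

@dataclass
class GeneratedAsset:
    asset_id: str
    model: str      # which model produced the asset
    prompt: str     # the prompt that produced it
    revoked: bool = False

class AssetLog:
    """Minimal sketch of per-asset provenance logging with after-the-fact
    revocation, as the tip describes. Illustrative, not a full system."""
    def __init__(self) -> None:
        self._assets: dict[str, GeneratedAsset] = {}

    def record(self, asset: GeneratedAsset) -> None:
        self._assets[asset.asset_id] = asset

    def revoke(self, asset_id: str) -> None:
        self._assets[asset_id].revoked = True

    def is_revoked(self, asset_id: str) -> bool:
        return self._assets[asset_id].revoked

    def audit(self, model: str) -> list[str]:
        """All asset ids produced by a given model, e.g. when litigation
        against that model's vendor makes its outputs suspect."""
        return [a.asset_id for a in self._assets.values() if a.model == model]
```

The `audit` query is the payoff: when a model's training data lands in court, you can list every affected asset instead of guessing.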

Myth vs Reality

Myth

If the AI generates it, you own it

Reality

Copyright on AI-generated images is contested in most jurisdictions. The US Copyright Office has held that purely AI-generated images are not copyrightable; only the human-authored elements get protection. This affects whether you can stop competitors from copying your AI-generated brand assets — likely you can't.

Myth

Watermarking solves the disclosure problem

Reality

Most AI watermarks (visible or invisible) can be stripped or are not preserved through screenshots, format conversion, and re-edits. The realistic disclosure system is policy + visible labels at publish time + provenance in the file format (C2PA). Don't rely on watermarks alone for compliance.


Knowledge Check

Your marketing team wants to use a free, scraped-data image model to generate the visuals for a major product launch campaign, and brand and legal review aren't in the workflow. What is the largest immediate risk?

Industry benchmarks

Is your number good?

Calibrate against real-world tiers. Use these ranges as targets — not absolutes.

% of Commercial AI Image Output on Indemnified, Provenance-Aware Tools

Enterprise marketing and customer-facing image generation

Best Practice: 100%
Acceptable: 85-99%
Elevated Risk: 60-84%
Open Liability: < 60%
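The tier ranges above map directly to a small classifier, useful if you track this metric in a dashboard. A sketch under the assumption that the input is a percentage of commercial output volume:

```python
def indemnified_share_tier(pct: float) -> str:
    """Map the share (%) of commercial AI image output produced on
    indemnified, provenance-aware tools to the benchmark tiers above."""
    if pct >= 100:
        return "Best Practice"
    if pct >= 85:
        return "Acceptable"
    if pct >= 60:
        return "Elevated Risk"
    return "Open Liability"
```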

Source: hypothetical, synthesized from C2PA adoption data and enterprise procurement guidance

Real-world cases

Companies that lived this.

Verified narratives with the numbers that prove (or break) the concept.


Adobe Firefly

2023-2026

Outcome: success

Adobe launched Firefly with a deliberate position: commercially safe AI image generation. Trained on Adobe Stock, public domain, and openly licensed content, Firefly came with enterprise indemnification and native C2PA Content Credentials. Within two years, Firefly was generating over 20 billion images, becoming the default choice for enterprises whose legal teams refused to clear scraped-data models. Adobe's bet: when a market is being reshaped by litigation risk, the safe choice wins enterprise share even if the consumer 'magic' is debatably better elsewhere.

Reported Generations: 20B+ images by 2025
Indemnification: Enterprise tier
Provenance Standard: C2PA Content Credentials native

When the regulatory and litigation environment is uncertain, indemnification + provenance is the enterprise moat. Adobe didn't win on best-in-class image quality; they won on being the choice that wouldn't get the GC fired.


Getty Images v Stability AI

2023-2026

Outcome: mixed

Getty Images filed lawsuits in the US and UK alleging that Stability AI trained Stable Diffusion on millions of Getty's copyrighted images without license. The case became a reference point for downstream risk: enterprises using Stable Diffusion outputs commercially faced uncertain copyright exposure depending on how the litigation resolved. Several enterprise marketing teams that had standardized on open-source Stable Diffusion publicly reverted to indemnified tools (Firefly, DALL-E Enterprise) after legal review. The litigation reshaped the enterprise image-gen vendor landscape.

Plaintiff: Getty Images
Allegation: Training on unlicensed copyrighted images
Enterprise Impact: Migration to indemnified tools

Litigation shapes vendor selection faster than benchmarks do. When a model's training data is in dispute, downstream commercial users carry the risk. Pick tools where the vendor takes that risk on themselves.


Decision scenario

Scaling AI Image Generation Without Brand Liability

You're VP Brand at a global consumer products company. Marketing wants to use AI for 60% of digital creative across 25 markets — millions of generated images per year. Your CMO wants speed; your General Counsel wants safety. EU AI Act disclosure obligations are 6 months out.

Annual Image Volume Target: ~1.5M images
Current Stock / Production Cost: ~$28M/year
Markets in Scope: 25
EU AI Act Compliance Deadline: 6 months

Decision 1

The growth team has a working prototype on a consumer Midjourney plan. The image quality is excellent. The cost is low. Your General Counsel hasn't reviewed it. Switching to Adobe Firefly + Vertex AI Imagen would slip the rollout 6 weeks but provide indemnification, C2PA provenance, and EU disclosure readiness.

Option A: Approve the Midjourney rollout. Loop in legal in parallel and switch later if needed.
Rollout starts strong. Within 5 months, three issues land: (1) a campaign image too closely resembles a well-known illustrator's style; cease-and-desist + settlement = $400K + pulled campaign. (2) EU regulator inquiry on missing AI disclosure across two campaigns; goodwill cost + remediation. (3) Migration to indemnified tool happens anyway under crisis pressure, costing 3× what a planned migration would have. Total cost of the 'speed' choice: meaningfully higher than the 6-week delay would have been, plus durable damage to the legal-marketing relationship.
Settlements / Fines: $400K+ · Crisis Migration Cost: 3× planned migration · Legal Trust: Damaged
Option B: Pause 6 weeks. Run tool selection: Firefly for bulk production (indemnified, native C2PA), Imagen on Vertex AI for hero shots, Designer for internal use. Add C2PA into the publishing pipeline. Brand + legal pre-publish review for external campaigns.
Rollout launches at month 4 with full indemnification, EU disclosure compliance, and a documented review workflow. Volume scales to 1.2M images in year 1 with zero IP incidents and zero regulatory issues. The pre-publish review catches a competitor-resembling image before publish in month 7. Total program cost ~$2.5M (tool licenses + workflow build) against $25M+ savings vs stock production. Legal becomes a partner, not a blocker.
Year-1 Net Savings: $22M+ · IP / Regulatory Incidents: 0 · Legal Partnership: Strong


Beyond the concept

Turn AI Image Generation Policy into a live operating decision.

Use this concept as the framing layer, then move into a diagnostic if it maps directly to a current bottleneck.

Typical response time: 24h · No retainer required
