KnowMBA Advisory
AI Strategy · Intermediate · 7 min read

AI Use Case Selection

AI use case selection is the discipline of choosing which problems in your business actually deserve AI investment, and which are vanity projects masquerading as innovation. The right framework scores candidate use cases on two axes: business value (revenue lift, cost reduction, risk reduction) and technical feasibility (data availability, model maturity, integration complexity). McKinsey found that 70% of enterprise AI projects fail to deliver value, and the #1 cause is selecting use cases with weak ROI math. The winning portfolio mixes 60-70% near-term efficiency plays (where AI augments existing workflows), 20-30% revenue-generating use cases, and 10% exploratory bets. If you cannot articulate the dollar value of a use case in one sentence, do not fund it.

Also known as: AI Use Case Prioritization, AI Opportunity Scoring, AI Portfolio Selection, Where to Apply AI

The Trap

The trap is starting with 'we should use AI' instead of 'we have a problem that AI happens to solve.' Founders read a Bloomberg headline about Klarna replacing 700 agents and immediately commission a chatbot, without checking whether their support volume even justifies the build cost. The second trap is selecting flashy use cases (image generation, autonomous agents) over boring high-value ones (invoice extraction, lead scoring, churn prediction). The boring ones win because the data is clean, the value is measurable, and incumbents already have proven playbooks. Finally, teams confuse 'AI-feasible' with 'AI-valuable': just because GPT-4 CAN draft your emails does not mean drafting emails is your bottleneck.

What to Do

Run a structured intake every quarter:

1. Have each function submit 3-5 candidate use cases with a 1-page brief: problem, current cost, expected lift, data sources.
2. Score each on Value (1-10) and Feasibility (1-10) using a fixed rubric.
3. Plot on a 2x2 matrix. Fund the top-right quadrant aggressively, pilot the top-left (high value, harder), kill the bottom half.
4. Set a $50K-$150K budget cap on every pilot with a 90-day kill date.
5. Require every funded use case to declare its baseline metric and target lift BEFORE the build starts.
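The 2x2 triage step can be sketched in a few lines. A minimal illustration, assuming the 1-10 scales above and a midpoint of 5; the candidate names and scores are invented for the example:

```python
def quadrant(value: float, feasibility: float, midpoint: float = 5.0) -> str:
    """Classify a scored use case into the 2x2 matrix (1-10 scales)."""
    if value >= midpoint and feasibility >= midpoint:
        return "fund"    # top-right: fund aggressively
    if value >= midpoint:
        return "pilot"   # top-left: high value, harder to build
    return "kill"        # bottom half: kill

# Illustrative candidates: (business value, technical feasibility)
candidates = {
    "invoice extraction": (8, 9),
    "demand forecasting": (9, 4),
    "exec memo drafter":  (3, 8),
}

for name, (v, f) in candidates.items():
    print(f"{name}: {quadrant(v, f)}")
```

The midpoint is a tunable assumption; the point is that the triage rule is fixed before scoring, so debates happen about the scores, not the verdicts.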

Formula

Use Case Score = (Business Value × 0.6) + (Technical Feasibility × 0.4); fund if score ≥ 7.0
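The formula translates directly to code. A minimal sketch of the 60/40 weighting and the 7.0 funding bar, with an illustrative input:

```python
def use_case_score(business_value: float, technical_feasibility: float) -> float:
    """Weighted score per the formula above; both inputs on a 1-10 scale."""
    return business_value * 0.6 + technical_feasibility * 0.4

def should_fund(business_value: float, technical_feasibility: float) -> bool:
    """Fund only if the weighted score clears the 7.0 bar."""
    return use_case_score(business_value, technical_feasibility) >= 7.0

# A high-value, reasonably feasible candidate clears the bar:
print(round(use_case_score(8, 7), 2))  # 7.6 -> fund
```

Note the asymmetry: a 10-feasibility project with mediocre value (e.g. value 5) scores only 7.0, barely at the bar, while a value-9 project needs just feasibility 4 to reach the same score. The weighting encodes the article's bias toward value over buildability.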

In Practice

JPMorgan's COIN (Contract Intelligence) platform is the textbook example of disciplined use case selection. Instead of chasing AI moonshots, they targeted commercial loan agreement review โ€” a process that consumed 360,000 lawyer-hours per year. The use case scored high on value (clear $/hour cost), high on feasibility (structured documents, repeatable patterns, abundant labeled examples), and had executive sponsorship. After deployment, COIN reviewed in seconds what previously took 360,000 hours annually. They picked a boring, high-value use case while competitors were still building chatbots.

Pro Tips

  • 01

Apply the 'Cost-Per-Decision' lens: count how many times a decision is made per month and the cost of each one. AI is most valuable in high-frequency, medium-stakes decisions (loan approvals, fraud flags, content moderation), not in low-frequency, high-stakes decisions (M&A, hiring an exec) where humans should stay in the loop.

  • 02

    Avoid the 'AI tax' on greenfield use cases. If your team has never shipped an ML system, your first project should NOT be a custom RAG pipeline. Start with a vendor tool (Glean, Cresta, Harvey) on a contained workflow, then graduate to custom builds once you've earned the operational chops.

  • 03

The best use cases sit on top of a process you've already mapped. If you cannot draw the current workflow on a whiteboard in 5 minutes, AI will not magically fix it: you have a process problem, not an AI problem.
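The Cost-Per-Decision lens from tip 01 is just frequency times unit cost. A quick sketch; the volumes and dollar figures below are illustrative assumptions, not benchmarks:

```python
def annual_decision_cost(decisions_per_month: int, cost_per_decision: float) -> float:
    """Annualized cost of a recurring decision: frequency x unit cost x 12."""
    return decisions_per_month * 12 * cost_per_decision

# High-frequency, medium-stakes: 20,000 fraud flags/month at ~$4 of analyst time each
fraud = annual_decision_cost(20_000, 4.0)        # strong AI candidate

# Low-frequency, high-stakes: 2 exec hires/month at ~$5,000 of diligence each
exec_hiring = annual_decision_cost(2, 5_000.0)   # keep humans in the loop

print(f"fraud flags: ${fraud:,.0f}/yr; exec hiring: ${exec_hiring:,.0f}/yr")
```

The fraud example carries roughly eight times the annual decision cost of the exec-hiring example, which is why the high-frequency case is the better automation target even though each individual decision is lower stakes.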

Myth vs Reality

Myth

“If a use case is technically feasible with current models, we should pursue it”

Reality

Feasibility is necessary but not sufficient. The real filter is whether the value of automation exceeds the total cost of ownership, including model inference, data prep, integration, change management, and ongoing maintenance. A feasible use case with a 6-year payback is a worse investment than a boring SaaS subscription.
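The TCO filter can be made concrete with a simple payback calculation. A sketch under illustrative numbers (the $300K build and $150K/yr run rate are invented for the example):

```python
def payback_years(annual_value: float, build_cost: float, annual_run_cost: float) -> float:
    """Years to recoup the build cost from net annual value.
    annual_run_cost rolls up inference, data prep, integration,
    change management, and maintenance."""
    net = annual_value - annual_run_cost
    if net <= 0:
        return float("inf")  # never pays back
    return build_cost / net

# A 'feasible' use case: $200K/yr of value, $300K build, $150K/yr to run
print(payback_years(200_000, 300_000, 150_000))  # 6.0 years -> pass
```

Counting only the build cost against gross value would show a misleading 1.5-year payback; the run-rate costs are what stretch it to 6 years.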

Myth

“AI use cases should generate new revenue, not cut costs”

Reality

Cost-reduction use cases dominate the actual ROI ledger of enterprise AI. Bain's 2024 survey found that 67% of measured AI value comes from efficiency gains in existing workflows. Revenue-generation use cases are sexier in board decks but have longer payback and higher failure rates. Stack your portfolio toward boring efficiency wins.

Try it

Run the numbers.

Pressure-test the concept against your own knowledge โ€” answer the challenge or try the live scenario.

🧪

Knowledge Check

Your CTO wants to invest in three AI initiatives: (A) GenAI sales-call coach scoring 200 daily reps, (B) computer-vision quality inspection on a $400/unit defect, replacing 4 inspectors at $80K each, (C) a custom LLM that drafts internal memos for execs. Which should you fund FIRST?
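One way to start running the numbers, without giving the answer away: only option B states a hard dollar baseline in the prompt. A sketch of that baseline (the per-unit defect value would need a volume assumption before it could be added):

```python
# Option B's stated labor baseline: 4 inspectors at $80K each
inspectors, salary = 4, 80_000
labor_baseline = inspectors * salary
print(f"Option B hard baseline: ${labor_baseline:,}/yr")

# Options A and C only have soft baselines (rep performance, exec drafting
# time) that the framework above requires you to establish before funding.
```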

Industry benchmarks

Is your number good?

Calibrate against real-world tiers. Use these ranges as targets, not absolutes.

Enterprise AI Use Case Outcomes (by value tier at funding time)

Synthesis of McKinsey, Bain, and BCG AI surveys 2023-2024

  • Hard $ baseline + sponsor: ~70% deliver measured value

  • Soft baseline (CSAT, NPS): ~30% deliver measured value

  • No baseline, strategic narrative only: <10% deliver measured value

Source: https://www.mckinsey.com/capabilities/quantumblack/our-insights/the-state-of-ai

Real-world cases

Companies that lived this.

Verified narratives with the numbers that prove (or break) the concept.

๐Ÿฆ

JPMorgan Chase (COIN)

2017-present

success

JPMorgan deployed COIN (Contract Intelligence) to extract data from commercial credit agreements. The team rejected sexier use cases (algorithmic trading, GenAI advisors) in favor of a high-volume, structured-document problem with a clean labor baseline. Result: a process that consumed 360,000 lawyer-hours annually now runs in seconds, with higher accuracy than human reviewers on the targeted clauses.

Annual Hours Eliminated

360,000

Document Processing Time

Hours → seconds

Use Case Profile

High-volume, structured, labor-bound

Initial Scope

One contract type, then expand

The best AI use cases are often the most boring ones: high-frequency, well-defined inputs, clear cost baselines. JPMorgan won by being disciplined about what NOT to build.

📋

Hypothetical: Mid-market Insurance Carrier

2024

mixed

Hypothetical: A 1,200-person insurance carrier funded six AI projects in parallel after a board AI mandate, with no baseline metrics required. 18 months later, four projects had been quietly shut down (chatbot, claims summarizer, agent recommender, marketing copy generator), one was stuck in pilot purgatory, and one (fraud anomaly detection on a 15-year-old rules engine) delivered $3.2M in prevented payouts. The CIO's lesson: the four killed projects shared one trait: no measurable baseline at funding time.

Projects Funded

6

Projects Killed

4

Successful Project Baseline

$/fraudulent claim, audited

Killed Projects Baseline

None at funding

Portfolio approaches fail when there is no scoring discipline. One use case with a hard baseline outperformed five 'strategic' bets combined.

Decision scenario

The AI Portfolio Allocation

You are CIO of a $400M consumer goods company. The CEO has approved a $1.2M AI budget and wants three projects funded. Your team has surfaced eight candidate use cases ranging from $80K invoice extraction to a $700K demand-forecasting overhaul.

AI Budget

$1.2M

Candidate Use Cases

8

Funded Projects Slots

3

Operational AI Maturity

Low (no production ML yet)

01

Decision 1

Your top two scored use cases are: (1) Demand forecasting overhaul โ€” $700K, projected $2.4M annual margin lift, but requires a new data pipeline and your team has never shipped ML to production. (2) Invoice extraction with a vendor tool โ€” $90K, projected $400K annual savings, vendor has reference customers. With one slot remaining after these two, do you fund the third based on dollar value alone, or balance the portfolio?

Option A: Fund the demand forecasting + invoice extraction + the highest-dollar remaining use case (a $300K marketing personalization project). Maximize upside.
12 months in: invoice extraction is live and saving $33K/month, the only win. Demand forecasting is 60% over budget because the data pipeline took longer than estimated, and the team is fighting model drift they're not equipped to handle. Marketing personalization is in pilot with no measured lift. The CEO is asking why $1.1M of the $1.2M is producing nothing yet. Future AI funding is in jeopardy.
Wins in Year 1: 1 of 3 · Operational Strain: Severe · Future AI Funding: At risk

Option B: Fund invoice extraction + a second vendor-tool win (lead scoring, $120K) + the demand forecasting at half-scope (one product line only, $350K). Build operational muscle before betting big.
12 months in: both vendor projects are live, saving $620K combined. The half-scope demand forecasting hit its target on the pilot product line, and your team learned the operational craft on a contained surface area. You go to the board with three measured wins, request $2M for next year, and now have the credibility to scale demand forecasting to all product lines with confidence.
Wins in Year 1: 3 of 3 · Operational Maturity: Low → Medium · Year-2 AI Budget: $1.2M → $2M+ approved
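The portfolio choice above can be framed as a tiny constrained-selection problem: pick up to three projects under the $1.2M budget, maximizing total weighted score. A sketch; all candidate data below is invented for illustration, using the 60/40 formula from earlier:

```python
from itertools import combinations

# (name, cost, business value 1-10, technical feasibility 1-10) -- illustrative
candidates = [
    ("invoice extraction (vendor)",      90_000, 7, 9),
    ("lead scoring (vendor)",           120_000, 7, 8),
    ("demand forecasting, half-scope",  350_000, 9, 5),
    ("marketing personalization",       300_000, 6, 4),
]

BUDGET, SLOTS = 1_200_000, 3

def score(value: float, feasibility: float) -> float:
    return value * 0.6 + feasibility * 0.4

# Brute force is fine at this scale: try every combination within budget,
# keep the one with the highest total score.
best = max(
    (combo for r in range(1, SLOTS + 1)
     for combo in combinations(candidates, r)
     if sum(c[1] for c in combo) <= BUDGET),
    key=lambda combo: sum(score(c[2], c[3]) for c in combo),
)

for name, cost, v, f in best:
    print(f"{name}: ${cost:,}, score {score(v, f):.1f}")
```

With these assumed scores the optimizer lands on the two vendor tools plus the half-scope forecasting build, the same mix as Option B, because the low-feasibility marketing project scores poorly even though its dollar budget is large.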


Beyond the concept

Turn AI Use Case Selection into a live operating decision.

Use this concept as the framing layer, then move into a diagnostic if it maps directly to a current bottleneck.

Typical response time: 24h · No retainer required
