KnowMBA Advisory · AI Strategy · Intermediate · 6 min read

AI Team Structure

AI team structure is the organizational pattern you use to staff and govern AI work — centralized lab, embedded squads, hub-and-spoke, or platform team. The choice affects velocity, leverage, consistency, and where AI cost and risk live. Centralized labs (DeepMind-style) deliver depth on hard research problems but ship slowly into product. Embedded squads (one ML/AI engineer per product team) ship fast but duplicate infrastructure and develop inconsistent practices. Hub-and-spoke (a central AI platform team plus embedded specialists) is the most common pattern at companies past ~50 engineers because it captures both leverage (shared infra, governance, evaluation) and product proximity (squads own use-case fit). The structure should follow the AI maturity stage and the product type — not the other way around.

Also known as: AI Org Design, ML Team Structure, AI Staffing Model, AI Team Topology

The Trap

The trap is copying a structure from a company at a different maturity stage. A 30-person startup hiring a 'Head of AI Research' to build a centralized lab will produce papers, not products. A 5,000-person enterprise scattering one ML engineer into every product team produces 40 reinventions of the same RAG pipeline. The opposite trap is reorgs-as-strategy: changing the team chart every six months while the actual problem is unclear ownership of evaluation, cost, and governance. Most 'AI org problems' are actually missing-roles problems — no one owns evals, no one owns cost, no one owns model lifecycle.

What to Do

Stage your structure to your maturity.

Stage 1 (exploration, <100 engineers): 1-3 generalist AI engineers embedded in the most-impacted product team.

Stage 2 (multiple AI features in production): hub-and-spoke — a small platform team (3-6 people) owning shared infra, evals, governance, and the model registry, with embedded AI engineers in product squads owning use-case fit.

Stage 3 (AI central to product): the platform team grows to 10-20, with a formal Head of AI and dedicated evaluation and safety functions.

At every stage, name owners for evals, inference cost, model lifecycle, and AI security/safety. Reorg only when the org chart blocks shipping — not on a schedule.

Formula

Structure Fit = (AI maturity stage) × (Product type) × (Org size); revisit when shipping velocity drops or duplicated work appears
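The staged decision rule above can be sketched as a small heuristic. This is a minimal, hypothetical illustration: the thresholds and return strings mirror the stages described in this article, not a formal model, and `recommend_structure` and `missing_owners` are names invented for this sketch.

```python
# Hypothetical sketch of the staging heuristic described above.
# Thresholds mirror the article's stages; tune them for your org.

def recommend_structure(engineers: int, ai_features_in_prod: int,
                        ai_is_the_product: bool = False) -> str:
    """Map company scale and AI maturity to a team-structure pattern."""
    if ai_is_the_product:
        # Research-as-product companies (model labs) stay research-heavy.
        return "centralized research-heavy"
    if engineers < 100 and ai_features_in_prod <= 1:
        # Stage 1: exploration -- a few generalists embedded where impact is highest.
        return "embedded (1-3 generalist AI engineers)"
    if ai_features_in_prod >= 2 and engineers < 1000:
        # Stage 2: multiple features in production -- hub-and-spoke.
        return "hub-and-spoke (platform team of 3-6 + embedded engineers)"
    # Stage 3: AI central to product at scale.
    return "hub-and-spoke at scale (platform 10-20, Head of AI, evals/safety)"

# Named owners the article says every stage needs, regardless of structure.
REQUIRED_OWNERS = ["evals", "inference cost", "model lifecycle", "AI security/safety"]

def missing_owners(assigned: dict) -> list:
    """Return the ownership gaps -- most 'org problems' live here."""
    return [role for role in REQUIRED_OWNERS if not assigned.get(role)]
```

Applied to the Knowledge Check scenario below (200 engineers, 4 AI features in production), the heuristic lands on hub-and-spoke — and `missing_owners` would flag that no one owns evals or inference cost across the three squads.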

In Practice

Meta, Google, and Microsoft all evolved through the same arc: early dedicated AI labs (FAIR, Google Brain, MSR AI) producing research, followed by integration into product divisions as AI moved from research to product layer. Stripe, Airbnb, and Shopify use hub-and-spoke: a central ML platform team + embedded ML engineers in product. Anthropic, OpenAI, and DeepMind retain research-heavy structures because their product IS the model. The structure mirrors the company's AI value-chain position: research-as-product → centralized; AI-as-feature → embedded; AI-as-platform-and-feature → hub-and-spoke.

Pro Tips

1. Before reorganizing, ask: 'Which roles are missing?' 80% of 'org problems' are actually role gaps — usually no one explicitly owns evaluation, inference cost, or post-deployment monitoring. Add the role before redrawing boxes.

2. A platform team with no embedded counterparts in product squads becomes an ivory tower; embedded engineers with no platform team become 40 reinventions of the same wheel. Hub-and-spoke needs both halves staffed or it fails.

3. Don't hire a Head of AI before you have 5+ AI engineers reporting up. The role becomes a one-person 'AI team' that nobody listens to because there's no one to lead.

Myth vs Reality

Myth

Every company needs a centralized AI Center of Excellence

Reality

An AI CoE makes sense at >200 engineers with multiple product lines using AI. At smaller scale, a CoE is overhead without leverage — embed AI engineers directly into the teams shipping AI features. The CoE pattern is borrowed from enterprise IT and doesn't translate to small product orgs.

Myth

Structure follows talent — hire the people first, organize them later

Reality

Structure should be designed BEFORE hiring at scale, because the structure determines who you can recruit. A great applied AI engineer won't take a role with no platform support; a great researcher won't take a role embedded in a product squad with no research mandate. The chart frames the offer.


Knowledge Check

You're a 200-engineer SaaS company with 4 AI features in production across 3 product squads. Each squad built its own evaluation harness, its own prompt management, and its own RAG pipeline. Inference cost varies wildly between squads. What's the right next move?

Industry Benchmarks

Calibrate against real-world tiers. Use these ranges as targets, not absolutes.

AI Team Structure by Company Stage

Common patterns observed across product-AI organizations; varies by AI-centrality of product

Pre-product-market-fit (<30 eng): 1-2 embedded AI engineers
Scaling startup (30-200 eng): embedded + guild for sharing
Mid-market (200-1,000 eng): hub-and-spoke (platform + embedded)
Enterprise (1,000-5,000 eng): platform + CoE + dedicated evals/safety
Hyperscaler / AI-native (5,000+ eng): multi-platform + research arm + applied

Source: Aggregated industry observation; verify against your stage and product type

Real-world cases

Companies that lived this.

Case narratives, real and hypothetical, with the numbers that prove (or break) the concept.


Stripe ML Platform Team (industry pattern)

2020-2026

success

Stripe evolved its ML org through the standard arc: early embedded ML engineers in fraud and risk; growth into a centralized ML platform team owning shared infra (feature stores, model serving, evaluation); embedded ML specialists in product squads owning fraud, payments, and growth use cases. The platform team owns the substrate; product squads own the use cases. Public engineering blog posts from Stripe describe the model registry, feature store, and evaluation harness as platform-level, with feature engineering and model selection embedded.

Pattern: Hub-and-spoke
Platform team owns: feature store, serving, eval, registry
Embedded engineers own: use-case fit, feature engineering, business metrics
Outcome: consistent infra + product velocity

The hub-and-spoke pattern at scale is the dominant structure for product-AI organizations because it solves both leverage and proximity. Centralized-only or embedded-only patterns optimize for one and starve the other.


Hypothetical: 80-Engineer Scaling Startup

2025

failure

Hypothetical: An 80-engineer B2B startup hired a 'VP of AI' to centralize all AI work. Within 9 months: AI engineers were detached from product squads, AI features shipped 3x slower than before, product squads built shadow AI tools to bypass the central team, and the VP of AI departed. The new structure: dissolve the central team, embed 1 AI engineer per product squad, retain 2 platform engineers for shared infra. Velocity recovered within a quarter.

Pre-reorg AI velocity: baseline
Post-centralization: ~3x slower shipping
Shadow AI tools built by squads: multiple
Time to recover after re-embed: ~1 quarter

Hypothetical: Centralizing too early — before the company has the AI volume to justify a platform layer — destroys velocity without producing leverage. Structure should follow scale, not aspiration.


Beyond the concept

Turn AI Team Structure into a live operating decision.

Use this concept as the framing layer, then move into a diagnostic if it maps directly to a current bottleneck.
