FIELD REPORT · AI

Building an AI Center of Excellence: Structure, Charter, and Operating Model

A practical guide to standing up an AI CoE: operating models, charter template, staffing, funding, and when a CoE is the wrong answer.

PUBLISHED
May 10, 2026
READ TIME
10 MIN
AUTHOR
ONE FREQUENCY

An AI Center of Excellence is not a department. It is a forcing function for shared standards. When it works, it compounds. When it fails, it becomes the third bottleneck your business units route around. The difference is almost always in the charter, not the talent.

This is the playbook we use when a client says "we need to stand up an AI CoE." Sometimes the answer is yes, here is how. Sometimes the answer is no, you need something else. Both responses are common.

When a CoE is the wrong answer

Skip the CoE if you are under 500 employees. You do not have enough surface area to justify a dedicated team. A two-person AI working group reporting to the CTO is sufficient and far cheaper.

Skip the CoE if you are already a data-mature organization with embedded ML teams in every business unit. You are not centralizing; you are duplicating. What you need is a federated council with quarterly cadence, not a new org box.

Skip the CoE if the real problem is executive alignment. A CoE cannot substitute for a CEO who will not pick the top three priorities. You will burn 18 months and a Director of AI before that becomes obvious.

If you are still reading, you probably do need one. The rest of this article assumes a 2,000 to 50,000 employee organization where AI investment has crossed eight figures and use cases are sprouting in three or more business units without coordination.

Three operating models

There is no neutral choice here. Each model has structural consequences.

Centralized

All AI work happens in the CoE. Business units submit requests. Pros: consistent quality, single budget, easier governance. Cons: bottleneck within 18 months, business units feel disempowered, the CoE becomes a "no factory."

Use when: regulatory load is high (banks, federal, healthcare), data is concentrated, AI maturity is low across the business.

Federated

Each business unit owns its AI capability. The CoE publishes standards, runs the platform, and adjudicates risk. Pros: speed, business ownership, scales naturally. Cons: inconsistent quality, harder to enforce standards, redundant tooling spend.

Use when: business units are already mature and well-funded, regulatory posture is moderate, AI is a competitive differentiator inside each unit.

Hub and spoke

The CoE owns the platform, governance, and shared services. Embedded AI leads sit inside each business unit on a dotted line to the CoE. Pros: balance of speed and standards, clear career path for AI talent, shared infrastructure. Cons: hardest to execute, dotted-line authority creates friction.

Use when: you are a multi-business unit enterprise, you have at least three priority business units, and you have a CIO or CDAO with real cross-business unit authority.

For most clients in the 2,000 to 50,000 employee range, hub and spoke is the right answer. It is also the hardest to set up correctly.

The CoE charter

The charter is one to three pages. If it is longer, you have not done the work to compress it. Every charter needs five sections.

1. Mission

One sentence. Example: "Accelerate measurable business outcomes from AI by providing shared platform, governance, and expertise that business units cannot economically build alone."

If your mission includes the word "innovation," rewrite it. Innovation is not a mission; it is a side effect.

2. Scope

What the CoE does and does not do. Explicit. Example in scope: model governance, platform operations, foundational training, pilot acceleration, vendor management for AI tooling. Example out of scope: data engineering for individual business unit pipelines, application development beyond pilots, business process redesign.

Out of scope is more important than in scope. It is what business units will try to push to you.

3. Decision rights

This is where 80 percent of charters fail. Use a simple RACI grid. Sample decisions:

| Decision | CoE | BU | Steering Committee |
|----------|-----|-----|--------------------|
| Approved foundation model list | A | C | I |
| Use case prioritization within a BU | C | A | I |
| New vendor over $250K | R | C | A |
| Model deployment to production | A | R | I |
| Policy exceptions | R | C | A |
| Inference budget per BU | C | A | I |

If you cannot fill in this grid in week one, you do not have a CoE; you have a working group.
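One way to keep the grid honest after week one is to hold it in a machine-readable form so gaps surface automatically. The sketch below is illustrative, not part of any standard tooling: the decision names mirror the sample table, the role keys and the audit helper are invented for the example.

```python
# Minimal sketch: encode the sample RACI grid and flag gaps.
# Decision names and role keys are illustrative, not a prescribed schema.
RACI = {
    "Approved foundation model list":      {"CoE": "A", "BU": "C", "Steering": "I"},
    "Use case prioritization within a BU": {"CoE": "C", "BU": "A", "Steering": "I"},
    "New vendor over $250K":               {"CoE": "R", "BU": "C", "Steering": "A"},
    "Model deployment to production":      {"CoE": "A", "BU": "R", "Steering": "I"},
    "Policy exceptions":                   {"CoE": "R", "BU": "C", "Steering": "A"},
    "Inference budget per BU":             {"CoE": "C", "BU": "A", "Steering": "I"},
}

def audit(raci: dict) -> list[str]:
    """Return a warning for every decision that lacks exactly one Accountable owner."""
    warnings = []
    for decision, roles in raci.items():
        accountable = [who for who, code in roles.items() if code == "A"]
        if len(accountable) != 1:
            warnings.append(f"{decision}: expected exactly one 'A', found {len(accountable)}")
    return warnings

if __name__ == "__main__":
    for line in audit(RACI) or ["RACI grid is complete: every decision has one accountable owner."]:
        print(line)
```

The point is not the script; it is that an ambiguous accountability cell gets caught before it turns into a turf dispute.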

4. Success metrics

Three to five metrics. No more. Examples that work:

  1. Number of production AI use cases with documented dollar impact (target: 12 in year one)
  2. Aggregate financial impact across portfolio (target: $8M annualized by end of year one)
  3. Time from approved use case to production (target: under 12 weeks median)
  4. Platform uptime and cost per 1M tokens (target: 99.5% uptime; cost decreasing 15% YoY)
  5. AI literacy score among 5,000 most relevant employees (target: 70% pass on standardized assessment)

Avoid: training hours delivered, pilots launched, models evaluated. These are activity, not outcomes.
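You do not need a dashboard to track metrics 1 through 3 early; a small script over the use case portfolio is enough for the first few quarters. The sketch below assumes a simple record per use case (the field names and figures are invented for illustration) and computes the count of shipped use cases, aggregate annualized impact, and median approved-to-production time.

```python
from datetime import date
from statistics import median

# Hypothetical portfolio records; field names and values are illustrative only.
portfolio = [
    {"name": "Claims triage",   "approved": date(2026, 1, 12), "in_prod": date(2026, 3, 20), "annualized_impact": 1_400_000},
    {"name": "Contract review", "approved": date(2026, 2, 3),  "in_prod": date(2026, 4, 28), "annualized_impact": 900_000},
    {"name": "Demand forecast", "approved": date(2026, 2, 17), "in_prod": None,              "annualized_impact": 0},
]

shipped = [u for u in portfolio if u["in_prod"]]
cycle_weeks = [(u["in_prod"] - u["approved"]).days / 7 for u in shipped]

print(f"Production use cases: {len(shipped)} (target: 12)")
print(f"Aggregate annualized impact: ${sum(u['annualized_impact'] for u in shipped):,.0f} (target: $8M)")
print(f"Median approved-to-production: {median(cycle_weeks):.1f} weeks (target: under 12)")
```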

5. Funding model

Decide whether you are general-ledger funded, chargeback funded, or hybrid. General ledger removes friction but creates moral hazard (business units treat AI as free). Pure chargeback creates discipline but slows experimentation. Hybrid is the right answer for most: GL funds the platform and governance, chargeback funds inference and bespoke build.
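A worked example makes the hybrid split tangible. In the sketch below, the platform and governance budget sits on the general ledger while each business unit is charged back for its own inference tokens and embedded build weeks; every rate and volume is invented for illustration.

```python
# Hybrid funding sketch: GL covers platform and governance, chargeback covers
# inference and bespoke build. All rates and volumes below are invented.
GL_PLATFORM_BUDGET = 2_400_000   # annual platform + governance cost, funded centrally
COST_PER_1M_TOKENS = 4.00        # blended inference rate passed through to BUs
EMBED_WEEK_RATE = 9_000          # fully loaded cost of one Tier 2 embed week

bu_usage = {
    "Claims":       {"tokens_millions": 1_800, "embed_weeks": 10},
    "Underwriting": {"tokens_millions": 950,   "embed_weeks": 0},
    "Servicing":    {"tokens_millions": 400,   "embed_weeks": 8},
}

for bu, usage in bu_usage.items():
    chargeback = (usage["tokens_millions"] * COST_PER_1M_TOKENS
                  + usage["embed_weeks"] * EMBED_WEEK_RATE)
    print(f"{bu}: chargeback ${chargeback:,.0f}")

print(f"Centrally funded platform and governance: ${GL_PLATFORM_BUDGET:,.0f}")
```

The design choice worth defending is the pass-through on inference: it keeps experimentation cheap on the GL-funded platform while making sustained production usage visible on the business unit's own P&L.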

Staffing the CoE

Year one staffing for a hub-and-spoke CoE in a 10,000-person enterprise typically runs 12 to 20 FTEs. The composition matters more than the count.

  • Head of AI / CoE Director (1). Reports to CIO, CDAO, or COO. Must have line operating experience, not just technical depth.
  • AI Product Managers (2 to 4). Own use case portfolios, translate business asks into technical scope. The scarcest hire in this list.
  • ML Engineers (3 to 5). Build, fine-tune, deploy. Not data scientists; engineers who ship.
  • MLOps / Platform Engineers (2 to 4). Own the gateway, observability, eval harness, vector stores.
  • AI Governance and Risk Lead (1). Owns the risk register, policy, regulatory engagement. Often a recovering compliance or legal professional.
  • Applied AI / Prompt Engineers (2 to 3). The people who actually make the model work on the use case. Underrated and underpaid.
  • Data Engineer for AI (1 to 2). Owns the pipelines that feed RAG and fine-tuning workloads.

Skip the "Chief AI Officer" title unless the CEO is genuinely making AI the company strategy. The title sets expectations the organization is not ready to meet. A Head of AI or VP of AI does the same job with less ceremony.

Engagement model with business units

Publish a one-page engagement model. It answers: how does a business unit get help? Three tiers usually work.

Tier 1: Self-serve. The platform is available, the eval harness runs, the docs are good. BU teams can build inside the rails without CoE involvement. Target: 60% of activity.

Tier 2: Co-build. The CoE provides an embedded engineer or PM for 8 to 12 weeks to accelerate a specific use case. BU funds the embed. Target: 30% of activity.

Tier 3: Lighthouse. CoE-led build for strategic use cases the steering committee designates as enterprise priorities. Full CoE funding. Target: 10% of activity.

If your CoE is 80% Tier 3, you are running a project shop, not a CoE. Course-correct fast.
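A cheap early-warning signal is to tag every engagement with its tier and compare the quarterly mix against the targets above. A minimal sketch, with a made-up engagement log:

```python
from collections import Counter

# Hypothetical engagement log for one quarter; the counts are invented.
engagements = ["T1"] * 15 + ["T2"] * 6 + ["T3"] * 6
targets = {"T1": 0.60, "T2": 0.30, "T3": 0.10}

mix = Counter(engagements)
total = len(engagements)

for tier, target in targets.items():
    actual = mix[tier] / total
    warning = "  <-- over target, review intake" if actual > target else ""
    print(f"{tier}: {actual:.0%} of engagements (target {target:.0%}){warning}")
```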

Quarterly cadence

The CoE runs on a quarterly drumbeat that mirrors the steering committee.

  1. Week 1: portfolio review with business unit leads
  2. Week 2: risk and policy review with CISO, GC
  3. Week 3: platform review (cost, uptime, eval results)
  4. Week 4: external scan (model releases, vendor changes, regulatory shifts) and roadmap update

Add an annual after-action review in Q4: what use cases did we kill, what did we ship, what did we learn, and what is changing about the charter for next year?

Common failure patterns

  1. The CoE owns too much. Within a year it is a bottleneck and BU leaders route around it.
  2. The CoE owns too little. Within a year it is a research team with no business impact.
  3. The funding model is unclear. Within six months, BU CFOs are at war with the CIO over allocation.
  4. The CoE leader is a brilliant technologist with no operating experience. Within 18 months they are exhausted and the program loses momentum.
  5. The charter is never revised. The original 2024 charter is still on the wiki in 2027 and nobody references it.

Sample charter outline you can steal

Here is a one-page outline you can drop into Confluence or Notion and adapt. Keep it tight.

```
AI Center of Excellence Charter — v1.0

  1. Mission (1 sentence)
  2. Scope
    • In scope (5 bullets)
    • Out of scope (5 bullets)
  3. Operating Model
    • Hub and spoke / centralized / federated
    • Tier 1 self-serve, Tier 2 co-build, Tier 3 lighthouse
  4. Decision Rights (RACI table)
  5. Success Metrics (3-5 outcomes)
  6. Funding Model (GL / chargeback / hybrid)
  7. Engagement Model with Business Units
  8. Quarterly Cadence
  9. Review Cycle (charter v2 due Q4)
```

If your draft charter is significantly longer than this outline, you have not done enough editing. Long charters do not get read. Short charters get referenced.

Reporting line: who does the CoE Director actually report to?

This is one of the most consequential structural decisions and it is often made casually. Four common reporting lines, each with consequences.

Reports to CIO. Most common. Pros: aligns with platform, security, and data governance. Cons: AI gets treated as IT, and the business may underinvest. Works when the CIO has strong business unit relationships.

Reports to CDAO (Chief Data and Analytics Officer). Increasingly common at data-mature firms. Pros: AI is treated as a continuation of analytics, data foundations are tight. Cons: the CDAO is often a technical leader without the operating authority to drive business unit change.

Reports to COO. Best when AI is primarily about operating efficiency. Pros: business unit alignment is automatic, change management is the COO's day job. Cons: the platform and security tradeoffs can get short-changed.

Reports to CEO directly. Rare and usually a mistake unless AI is genuinely the company strategy. Pros: maximum visibility and resources. Cons: the role becomes a high-pressure, high-visibility seat where the wrong person burns out in 18 months.

The right choice depends on your operating reality. If your data foundations are weak, report to CDAO and fix that first. If your operations are the value driver, report to COO. Most enterprises in the messy middle land at CIO and that is fine.

A note on the AI Council versus the AI CoE

Do not conflate these. The AI Council (or Steering Committee) is the governing body: executive members, quarterly cadence, decision rights over policy, major investment, and risk acceptance. The CoE is the operational team that executes the program and serves the business units.

Mature programs have both. The Council sets the destination; the CoE drives the route. Confusing the two leads to either a Council that micromanages or a CoE that operates without sanction.

Next steps

The charter is the easy part to draft and the hard part to enforce. We have facilitated dozens of these and the pattern is consistent: the first 90 days set the operating posture for years. If you want a sounding board on which model fits your org, or a facilitated charter workshop, that is exactly the kind of engagement One Frequency runs at the start of CoE buildouts.
