What executives actually need to know about AI at work
In brief
You're not implementing it yourself — but you approve the budget, set expectations, and are accountable for the results. Here's the executive-level view: what to ask, what to expect, and where things go wrong.
You have probably been asked to approve a budget for AI tools, weigh in on an AI policy, or understand why a Claude rollout is "going slower than expected." Maybe your team has been experimenting for six months and you're not sure if anything has changed.
This is the executive-level view. Not how to use Claude — how to govern a Claude implementation, ask the right questions, and know when something is working versus when you're being told what you want to hear.
The frame that matters most: AI is a capability, not a project
Most AI rollouts fail not because the tools don't work, but because organisations treat them like a project with an end date. "We rolled out Claude" is not a meaningful outcome. The outcome is whether specific people are doing specific work differently — faster, better, or cheaper.
Your job is to push past the adoption metrics (seat licenses used, logins this month) and ask about the actual work. Which teams are doing something they couldn't do before? Which workflows have changed? Where has it saved real time?
If no one can answer those questions concretely after three months, you have a tool deployment, not an AI capability.
The four questions to ask your team
1. Which teams are using it — and what are they using it for?
Not "are people logging in" but "what are they actually doing." Ask for specific examples. A CS team that drafts ticket responses 40% faster is a result. "Everyone is using it more" is not.
2. What have you configured that you couldn't just do with the default?
A real deployment involves Projects with custom system prompts, relevant documents loaded, and Skills or Connectors that connect Claude to your actual data. If the answer is "nothing — people just use Claude.ai," the organisation has not deployed Claude. Individuals have personal accounts.
3. What's the cost per meaningful outcome?
Your admin should be tracking token usage by team. More importantly, they should be able to map usage to outputs — tickets handled, documents produced, research completed. If you can't connect spend to outcomes, you can't manage the investment; a rough sketch of the arithmetic follows this list.
4. What's in the way?
The blockers that slow AI adoption in most organisations are not technical. They are: unclear guidance on what's allowed (people default to not using it), lack of a configured starting point (everyone reinvents the wheel), and no social proof (people don't see colleagues getting value, so they don't try). Ask your admin what the actual friction is — and whether it's been addressed.
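As an illustration of question 3, here is a minimal back-of-the-envelope sketch of the cost-per-outcome arithmetic. Every figure below is a hypothetical placeholder, not a benchmark; real numbers have to come from your admin's usage reporting and from each team's own output counts.

```python
# Hypothetical illustration only: every number below is a placeholder,
# not a benchmark. Replace with figures from your own usage reporting.

monthly_ai_spend_usd = 1_800      # assumed: seat licenses + usage for one CS team
tickets_drafted_with_ai = 2_400   # assumed: countable output for that team this month

cost_per_outcome = monthly_ai_spend_usd / tickets_drafted_with_ai
print(f"Cost per ticket drafted: ${cost_per_outcome:.2f}")  # $0.75 with these inputs

# The executive question: is that figure lower than the fully loaded cost
# of producing the same output the old way? If either input cannot be
# filled in, spend is not connected to outcomes and the investment is
# effectively unmanaged.
```

The point is not the precision of the division; it is whether the two inputs exist at all. A team that cannot name its spend or its countable outputs cannot answer question 3.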
What realistic ROI looks like
Two traps to avoid:
The vanity trap: Your team shows you hours saved based on self-reported surveys. Those savings are real but hard to verify. More useful: ask for specific workflow comparisons — how long a given task took before and how long it takes now, in a real example.
The AI hype trap: Claude can do almost anything, so almost everything sounds like an opportunity. Focus on the 3–5 use cases that involve your highest-volume, highest-value work. A great AI use case has: high frequency (daily or weekly), significant time cost, text-heavy or research-heavy nature, and consistent enough inputs that the AI can be configured well.
Measuring AI ROI covers the full framework. The key executive insight: ROI compounds when the same people use Claude for progressively more complex work, not when you expand to new teams too quickly.
Where things go wrong at the executive level
Setting unrealistic timelines. Real AI adoption takes 90–120 days to show measurable impact at team level. Organisations that declare success at 30 days (or failure at 45) have not run a real experiment.
Confusing deployment with adoption. Sending a "we now have Claude" company announcement and distributing licenses is day zero, not day one. Adoption requires configured tools, active training, and someone accountable for each team's use case.
Delegating without a mandate. You cannot ask someone to "lead AI implementation" and then not give them the authority to change how teams work. The admin can configure Claude brilliantly, but if department heads aren't accountable for their team's adoption, it stays optional — and optional things don't get prioritised.
Treating AI policy as a legal exercise. AI usage policies written by legal teams tend to produce long lists of things people cannot do. What you need is a short, clear framework that tells people what they can do, what requires review, and what's off-limits. See How to write an AI usage policy your team will actually follow.
What to expect at each stage
Month 1: Mostly exploration. Early adopters find use cases. Expect chaos in the first Project structures. This is normal.
Month 2–3: Teams start to develop consistent workflows. The first real time savings appear. Expect to hear from teams that are stuck — usually because their system prompt is vague or they haven't uploaded the right documents.
Month 3–6: Compounding starts. Teams that have been using Claude consistently start using it for progressively harder work. The gap between teams that adopted early and those that haven't becomes visible.
Month 6+: The question shifts from "is it working" to "what do we do next." New capabilities (agents, automated workflows, deeper integrations) become relevant. The administration work increases — you now need proper governance, not just setup.
Your single highest-leverage action
Make one department head accountable for their team's AI adoption — with a specific outcome to hit by a specific date. Not "use Claude more." Something like: "CS team handles 20% more tickets per person without quality drop by end of Q2." Then resource them properly and get out of the way.
The organisations that get real value from AI are not the ones with the biggest budgets or the most sophisticated setup. They are the ones where someone is genuinely accountable for making it work.