
Getting your team to actually use the agent — the change management problem nobody warned you about

In brief

Building the agent is the easy part. Getting people to use it is where most Agent Operators fail. Three types of resistance — trust, speed, job fear — each with a different fix. And one thing that kills adoption faster than anything else.


You built the agent. Your test cases pass. The output quality is good — you've checked it carefully, you know it works. You rolled it out to the team two weeks ago. And half of them are still doing the task manually.

Some are politely using the agent and then redoing the work themselves. Some are routing around it by messaging you directly. One or two are vocal: "I just think it's faster if I do it myself." And a few are clearly anxious in a way they haven't quite said out loud.

This is change management. And it's the part of the Agent Operator job that nobody prepares you for, because most of the writing about AI at work is about the technology, not about the humans who have to change their behavior to use it.

The technology working is necessary but not sufficient. Adoption is the actual job.


Why adoption fails

The reasons people don't adopt a working tool fall into three categories. They're distinct problems with distinct fixes. Treating all resistance the same way — more training, more encouragement, more mandates — is why most rollouts underperform.


Resistance Type 1: "I don't trust it"

What they're saying: The agent made a mistake in front of them — or they heard it made a mistake for someone else — and now they assume it's always wrong. Or they've never used AI and they have a general wariness about whether it's reliable.

What's actually going on: They don't have a mental model for when to trust it and when to check it. They're not being irrational — they're being appropriately cautious when they don't know the error rate or the failure pattern. The agent that's 95% accurate is still wrong 1 in 20 times. If they don't know which 1 in 20, everything feels suspect.

What doesn't work: General reassurance. "It's really good now, trust me" doesn't give them anything to act on. Showing them it's 95% accurate doesn't tell them which 5% they need to verify.

What works: Give them a specific rule. Tell them exactly when to trust it without checking, and exactly when to verify before using the output.

"Trust it for policy lookups — the policy documents are updated weekly and it's been accurate on 19 of our last 20 test cases. Always verify before forwarding contract clause references to external parties."

Specificity reduces anxiety more than reassurance. They're not asking for a guarantee — they're asking for a mental model. Give them one.


Resistance Type 2: "It's slower than doing it myself"

What they're saying: For this specific task, using the agent takes longer than the manual workflow.

What's actually going on: They're usually right, for them. Here's why: the people who are fastest at the manual workflow are the ones who have done it hundreds of times. They've developed shortcuts, muscle memory, pattern recognition. For them, opening a new interface, writing a prompt, waiting for output, and reviewing it adds friction compared to a workflow they could do in their sleep.

The agent isn't slower for everyone — it's slower for the people who were already fast. It's often significantly faster for people doing the task for the first time, or doing it infrequently, or who haven't had time to develop the expert shortcuts.

What doesn't work: Arguing that the agent is faster. For the expert user, it may genuinely not be, at least not yet.

What works: Stop targeting the expert users first. Target the people who will actually be faster with the agent:

  • New hires who haven't learned the manual workflow yet
  • People who do the task occasionally, not daily
  • People who are doing a task slightly outside their core expertise

Let the agent prove itself with this group. Measure the time savings. Then bring those numbers back to the skeptics: "The two people we onboarded last month are doing this task in 20 minutes. Before the agent, it was taking new hires 45–60 minutes for the first few months."


Resistance Type 3: "What happens to my job?"

What they're saying (usually not out loud): if the agent does this task, what do I do?

What's actually going on: This is fear, often unspoken, often not fully articulated, often experienced as something else entirely. The person who says "I just prefer doing it myself" might be saying something that's genuinely true about speed — or they might be saying something about how unsettled they feel about what's changing.

You don't have to be a psychologist to address this. But you do have to be direct.

What doesn't work: Vague reassurance. "Don't worry, nobody's losing their job over this" is something people have heard before, often before being told about a restructuring. It doesn't reduce fear — it signals that you're not going to be straight with them.

What works: Naming what changes and what doesn't, specifically.

"The agent handles first-line invoice categorization. That used to take 2 hours a week. Your job is now handling the 15% of invoices that the agent flags for review — the complex cases, the new suppliers, the ones where something is unusual. That work is more interesting and it's the work that matters."

If the honest version of this conversation is "yes, we're going to need fewer people to do this function," that's a harder conversation. But it's a conversation that deserves to be had directly, not managed with ambiguity. The people who are most anxious are usually the ones who sense that nobody's being straight with them.


The rollout approach that actually works

Don't mandate. Find the early adopter.

There's always one person in every team who's curious, who's been using AI in their personal life, who's slightly frustrated with the manual workflow and eager for a better one. Find them before you build the agent. Build the agent around their specific workflow and let them help you test it. When you roll out, they become the internal advocate — "I've been using this for three weeks and here's what it changed for me" carries more weight than any announcement you'll ever send.

Mandated adoption creates compliance. Organic advocacy creates actual use.

Give them their own numbers.

General statistics about AI productivity don't change behavior. Your team's own before-and-after numbers do.

If you can, have a few early adopters track their time on the relevant task for two weeks before the agent rollout. Track it after. "Before: 3.5 hours a week. After: 50 minutes" is concrete enough to believe and concrete enough to act on. People trust data from their own team more than anything else.
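If that tracking lives anywhere structured, the comparison itself is a few lines of arithmetic. A minimal sketch, assuming you've recorded weekly minutes per early adopter before and after rollout (the names and numbers below are illustrative placeholders, not real data):

```python
# Hypothetical before/after weekly time (in minutes) per early adopter.
# All names and numbers here are illustrative placeholders.
tracked_minutes = {
    "adopter_a": {"before": 210, "after": 50},
    "adopter_b": {"before": 180, "after": 65},
}

for person, t in tracked_minutes.items():
    saved = t["before"] - t["after"]
    pct = saved / t["before"] * 100
    print(f"{person}: {t['before']} -> {t['after']} min/week "
          f"({saved} min saved, {pct:.0f}% less)")
```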

Stay present during rollout.

The biggest mistake Agent Operators make: build the agent, send the announcement, disappear to build the next thing. Then wonder three months later why adoption is at 40%.

For the first month after any rollout, check in with the team weekly. Specifically ask: "What's working? What's frustrating? What's making you route around it?" Collect the friction points. Fix the ones you can fix quickly. Communicate the ones that are on your list.

Being visibly present — "I know the contract reference format is annoying, I'm fixing it this week" — builds the trust that drives actual adoption.


The thing that kills adoption faster than anything else

A confident wrong answer in a high-stakes moment.

Every time the agent produces a bad output that a real user acts on — sends to a client, uses in a decision, shares with leadership — you get a story. That story spreads. "I used the AI thing and it told me the wrong delivery date and I passed it on to the client and we had to call them back and apologize." That story reaches everyone in the team within a week, and the half of them who were already skeptical will cite it for the next six months.

You cannot prevent this entirely. But you can dramatically reduce the frequency. Run your evaluations. Test your edge cases. Know the failure modes before you expand scope. Don't roll out to 30 people until the agent is passing 18/20 test cases consistently.
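If your test cases already live in a script, that threshold can be an explicit gate rather than a judgment call. A minimal sketch, assuming a `run_agent` function and a list of cases with expected outputs (both are placeholders for whatever eval harness you actually use):

```python
# Hypothetical rollout gate: don't expand scope until the agent clears a
# fixed pass rate on the eval set (18/20 here, as in the text above).
# `run_agent` and `cases` are placeholders for your own eval harness.
PASS_THRESHOLD = 18 / 20

def passes_gate(cases, run_agent):
    passed = sum(1 for case in cases if run_agent(case["input"]) == case["expected"])
    rate = passed / len(cases)
    print(f"{passed}/{len(cases)} test cases passed ({rate:.0%})")
    return rate >= PASS_THRESHOLD
```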

The adoption math works both ways: one bad public failure can undo three months of good adoption progress. One visible, well-documented win — "it caught a clause discrepancy that would have cost us $40K" — can accelerate adoption faster than any training session.


Try this today

Look at the team the agent is deployed to. Find the one or two people who seem to have genuine reservations — not the vocal skeptics who will complain about anything, but the thoughtful, capable people who just aren't using it much.

Have a direct conversation. Ask them: what would have to be true for you to trust this? What specifically feels uncertain?

You will learn more in that 20-minute conversation than in all the usage analytics you've been looking at. And you'll almost certainly find one specific, fixable thing that's driving their hesitation.
