AI Codex
Agents & Orchestration · Core Definition

When Claude stops answering and starts doing

There's a clean line between a model that responds to questions and one that takes actions in the world. Understanding that line is the most important thing to know about building with AI right now.


Every AI interaction you've had so far probably looks like this: you type something, the model responds, you read the response and decide what to do next. The model talks. You act.

An AI agent flips this. The model doesn't just respond — it plans, executes, checks its own work, and keeps going until the task is done.

Think of the difference between asking a colleague "what should I do about this customer complaint?" versus handing them the complaint and saying "handle it." The first is a conversation. The second is delegation.

That's the shift from language model to agent.

What makes something an agent

An AI agent has three things a basic chatbot doesn't:

Tools. The ability to take actions beyond generating text — searching the web, reading files, writing code, calling APIs, sending messages, querying databases. Tools are how an agent reaches beyond the conversation into the world.

A goal, not just a prompt. Instead of responding to a single input, an agent works toward an outcome. "Summarize this document" is a prompt. "Research our three main competitors and produce a comparison report with sources" is an agent task.

A loop. The agent takes an action, observes what happened, decides what to do next, and repeats. This continues until the task is complete — or the agent determines it's stuck and asks for help.
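The three ingredients above compose into a single control structure. Here's a minimal sketch of that loop; the planner, the tool names, and the stop condition are all hypothetical stand-ins, not a real agent API:

```python
from dataclasses import dataclass, field

@dataclass
class AgentState:
    goal: str
    history: list = field(default_factory=list)  # (action, observation) pairs
    done: bool = False

def plan_next_action(state):
    # Stand-in for a model call that picks the next tool and arguments
    # based on the goal and everything observed so far.
    if not state.history:
        return ("search", state.goal)
    return ("finish", "task complete")

def run_tool(name, argument):
    # Stand-in tool dispatch: web search, file reads, API calls, etc.
    return f"result of {name}({argument!r})"

def run_agent(goal, max_steps=10):
    state = AgentState(goal=goal)
    for _ in range(max_steps):  # hard cap: every agent loop needs one
        action, argument = plan_next_action(state)   # plan
        if action == "finish":
            state.done = True                        # task complete
            break
        observation = run_tool(action, argument)     # act
        state.history.append((action, observation))  # observe, then repeat
    return state

state = run_agent("compare our three main competitors")
```

The `max_steps` cap is the piece beginners forget: without it, a stuck agent loops forever instead of stopping and asking for help.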

How Claude approaches agentic tasks

Claude is designed to be careful with this kind of power. When operating as an agent, Claude:

Prefers reversible actions. If Claude can read a file or copy it before editing, it will. The goal is to minimize hard-to-undo mistakes.

Checks in when uncertain. Rather than forge ahead when a decision point is ambiguous, Claude pauses and asks. You can configure how autonomous or conservative it should be depending on the stakes.

Maintains a minimal footprint. Claude doesn't request permissions it doesn't need, doesn't retain data beyond the task, and doesn't take side actions outside the stated goal.
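The "prefer reversible actions" behavior is easy to picture in code. A hedged sketch, assuming a simple file-editing tool (the function name and `.bak` convention are illustrative, not Claude's actual mechanism): snapshot before touching anything, roll back on failure.

```python
import shutil

def edit_with_backup(path, new_text):
    """Edit a file reversibly: keep a snapshot so the change can be undone."""
    backup = path + ".bak"
    shutil.copy2(path, backup)          # reversible step: original preserved
    try:
        with open(path, "w") as f:
            f.write(new_text)           # the potentially destructive action
    except Exception:
        shutil.copy2(backup, path)      # something went wrong: restore
        raise
    return backup
```

The pattern generalizes beyond files: dry-run flags, database transactions, and staged deploys all serve the same purpose of keeping mistakes cheap to undo.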

These aren't just design choices — they reflect Anthropic's approach to building AI that's powerful without being reckless. An agent with good judgment should act like a contractor who asks before drilling into a wall, not one who assumes permission and apologizes later.

The practical shapes of agentic Claude

In real deployments, Claude-as-agent usually takes one of these forms:

Single-agent with tools. Claude has access to tools (web search, code execution, file system) and works through a task autonomously. Good for well-defined workflows.

Orchestrator and subagents. A top-level Claude instance breaks a complex task into pieces and assigns them to specialized sub-instances. The orchestrator synthesizes the results. Good for tasks that benefit from parallelism.

Human-in-the-loop. Claude handles everything it can autonomously and surfaces decision points requiring human judgment. The human approves or redirects, and Claude continues.
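The human-in-the-loop shape boils down to an approval gate in front of risky actions. A sketch under stated assumptions: the risk classification and the `approve` callback are hypothetical stand-ins for whatever surface actually asks the human.

```python
# Hypothetical risk classification: which tool calls need sign-off.
HIGH_RISK = {"delete_file", "send_email", "deploy"}

def execute(action, approve):
    """Run low-risk actions directly; route high-risk ones through `approve`."""
    if action in HIGH_RISK and not approve(action):
        return ("skipped", action)   # human redirected: do not act
    return ("done", action)          # act, and the loop continues

# `approve` stands in for a prompt, a Slack message, a review queue, etc.
# Here the human only signs off on deploys.
results = [execute(a, approve=lambda name: name == "deploy")
           for a in ["read_file", "delete_file", "deploy"]]
```

Where you draw the `HIGH_RISK` line is the autonomy dial: widen the set for high-stakes domains, shrink it for well-tested workflows.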

The thing most people miss

The shift to agents changes your relationship with the AI. You're no longer reading every output before anything happens. That means the quality of your instructions, the quality of your tools, and your error-handling strategy all matter much more than they did when Claude was just answering questions.

The upside: tasks that used to take hours of back-and-forth can be delegated completely. The skill that unlocks this is learning to write goals, not prompts.