How to write an AI usage policy your team will actually follow
Most AI usage policies are either too vague to be useful or so restrictive they get ignored. Here is the framework that actually works.
Before your team starts using Claude in earnest, you need a usage policy. Not a 12-page legal document — a clear, practical set of rules that tells people what they can use Claude for, what requires human review, and what is off-limits.
Most organisations get this wrong in one of two ways: they write something so vague it provides no guidance ("use AI responsibly"), or so restrictive it immediately gets ignored ("all AI outputs must be reviewed by legal"). Here is a framework that holds up in practice.
The three-zone model
Think of usage in three zones:
Zone 1: Use freely
Tasks where Claude's output is an input to a human's work, not a deliverable that goes directly to anyone else. Internal drafts, research summaries, analysis, brainstorming, reformatting, summarising meeting notes. Claude makes the work faster; a human still decides what to do with the output.
Zone 2: Use with review
Tasks where Claude's output goes to another person — internal or external — but where the stakes are moderate. First drafts of customer emails, project updates, documentation, reports. The human reviews and edits before sending. Claude accelerates; a human approves.
Zone 3: Not without specific approval
Tasks with real-world consequences that are hard to reverse: anything involving financial commitments, legal agreements, medical advice, HR decisions, or binding representations to customers. These need explicit approval from the relevant function head before Claude is used in the workflow.
The zones are not about capability — Claude can do Zone 3 tasks competently. They are about accountability. Zone 3 items are the ones that, if wrong, create real problems.
The five things every policy must cover
1. Data you cannot share with Claude
Be explicit. Typically: personal data about employees or customers (names, contact details, health information, salary data), confidential financial information, passwords and credentials, legally privileged documents, and anything covered by an NDA.
If your company handles healthcare, financial, or legal data, talk to your legal and compliance team before finalising this section. Claude.ai (on Team and Enterprise plans) has strong privacy defaults, but the policy should reflect your specific obligations.
2. What requires human review before use
Anything in Zone 2. State it plainly: "All Claude-drafted external communications must be reviewed by the sender before sending." One sentence. No ambiguity.
3. What to do when Claude gets something wrong
Tell people how to report problems. A Slack channel, an email address, a form — the specific channel matters less than having one. If people know where to flag bad outputs, you learn about problems early. If they don't, you find out after something has gone wrong.
4. Who owns the policy
Someone has to be responsible for updating it as Claude evolves. Put a name or role. "Policy owner: [IT Lead / Head of Operations / whoever you are]." Review it every six months — Claude's capabilities and your organisation's usage will both change.
5. How to get help
Who can team members ask when they are not sure if something is OK? Name the person or channel. This is the most commonly missing element in AI policies, and the most important — it is the difference between a policy that empowers people and one that creates anxiety.
What to skip
Do not include:
- Long sections about how AI works. Your team does not need to know about tokens to use Claude safely.
- Warnings about everything that could go wrong. Lead with what people can do, not with a catalogue of risks.
- Philosophical statements about AI ethics. Save those for a different document. The usage policy should be operational.
Format
One page. Bullet points. Plain language. Date it and have it signed by whoever holds authority in your organisation. Put it somewhere people will actually find it, not buried in a wiki nobody visits.
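To make the format concrete, here is a sketch of what that single page might look like. The bracketed names, channels, and dates are placeholders to fill in, and the zone examples are illustrative, not exhaustive:

```
AI Usage Policy: [Company name]
Effective: [date] | Owner: [IT Lead / Head of Operations] | Next review: [date + 6 months]

Zone 1 - Use freely (Claude's output feeds your own work)
- Internal drafts, research summaries, analysis, brainstorming, meeting-note summaries

Zone 2 - Use with review (Claude's output goes to another person)
- First drafts of customer emails, project updates, documentation, reports
- Rule: review and edit every Claude draft before sending

Zone 3 - Not without approval from [relevant function head]
- Financial commitments, legal agreements, medical advice, HR decisions,
  binding representations to customers

Never share with Claude
- Personal data about employees or customers, confidential financials,
  passwords and credentials, privileged documents, anything under an NDA

If Claude gets something wrong
- Flag it in [#ai-feedback channel / reporting form / email address]

Not sure if something is OK?
- Ask [named person or channel] before proceeding

Signed: [name and role of whoever has authority]
```

Everything on the page traces back to the five elements above; if a line does not help someone make a decision, it does not belong there.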
A good usage policy is not a legal shield. It is a practical guide that helps people make decisions confidently. The less time anyone spends thinking "can I use Claude for this?" the more time they spend actually using it well.