When not to use Claude
Claude is genuinely powerful. It is also genuinely wrong for certain kinds of work. Knowing the difference is as important as knowing what it does well.
There is a lot of content about what Claude can do. Less about what it reliably cannot, or should not, do. This is the version of that conversation.
Knowing when not to use Claude is not pessimism — it is the thing that keeps your team from building misplaced trust, and the thing that keeps AI producing value instead of problems.
Do not use Claude as a source of current facts
Claude's training has a cutoff date. It does not know about things that happened after that date unless you use web search or provide the information directly. If you ask Claude about the latest regulatory guidance, a competitor's recent product launch, or current market pricing, you may get an answer that is confidently wrong.
The fix is not "never use Claude for this" — it is "always verify current facts against a current source." Web search Skills help here, but even those have limits. Claude synthesises; it does not replace primary sources.
Do not use Claude for arithmetic you are relying on
Claude can appear to do maths. It often gets it wrong. Not always — but enough that you cannot rely on it for any calculation that matters. This is a known characteristic of large language models: they are trained on text, not arithmetic, and they pattern-match more than they calculate.
For any numbers that will appear in a document, a decision, or a communication: use a spreadsheet. Claude can interpret numbers, explain trends, and write about financial results. It should not produce the numbers themselves.
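As a minimal sketch of the point, here is the deterministic alternative: compute the figure in code (or a spreadsheet) and let Claude write about the result. The function name and the revenue figures below are illustrative, not from the text.

```python
from decimal import Decimal, ROUND_HALF_UP

def pct_change(old, new):
    """Percentage change computed deterministically, not pattern-matched
    by a language model. Decimal avoids binary float rounding surprises."""
    old, new = Decimal(str(old)), Decimal(str(new))
    return ((new - old) / old * 100).quantize(Decimal("0.1"), rounding=ROUND_HALF_UP)

# Illustrative quarterly revenue figures: the number that goes in the
# document comes from here, not from the model.
print(pct_change(1_240_000, 1_418_000))  # → 14.4
```

The division of labour this implies: the code produces "14.4", and Claude can be asked to explain what a 14.4% quarter-on-quarter increase means for the business.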
Do not use Claude where the output has legal or regulatory force
Claude can draft a contract. It can summarise regulatory requirements. It can explain what a clause means. It cannot tell you whether a specific contract is enforceable in your jurisdiction, whether your practice meets a specific regulatory standard, or what the right legal position is in a dispute. These require professional judgment from a qualified person: judgment informed by context Claude cannot access, and carrying accountability Claude does not have.
Use Claude to prepare materials for legal review, not to replace it.
Do not use Claude where getting it wrong is irreversible
Send-on-behalf-of emails. Irreversible financial instructions. Public statements. Anything that goes out under your name and cannot be recalled. Claude can draft these things. A human should review them before they go anywhere.
The principle: if the downside of a wrong output is significant and cannot be undone, treat Claude's output as a draft, not a deliverable. The value is still there — drafting is faster than writing from scratch — but the accountability stays with the human.
Do not use Claude to replace expertise you need
Claude has read a great deal about medicine, law, accounting, and engineering. It is not a doctor, a lawyer, an accountant, or an engineer. It does not have professional accountability, it cannot examine your specific situation with the depth a professional can, and it does not know what it does not know in the way a professional does.
For complex decisions in specialised domains, Claude can help you understand the domain, prepare questions, and organise information. It is a good way to get up to speed before talking to an expert. It is not a substitute for the expert.
Do not use Claude where hallucination risk is high and undetectable
Some tasks make it easy to catch Claude when it is wrong — if it misquotes a policy you know well, you notice. Some tasks make it hard — if it summarises a document you have not read and fabricates a detail, you will not catch it without re-reading the original.
For tasks where you cannot verify the output and the stakes are high, Claude adds risk rather than reducing it. The solution is not to use Claude less — it is to structure the workflow so human verification is built in. But know when you are in this territory.
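One way to make "verification is built in" concrete is a hard gate in the workflow: model output cannot reach its destination until a human explicitly approves it. This is a minimal sketch; `require_review` and the callable `approve` are hypothetical stand-ins for whatever review step your team actually uses.

```python
def require_review(draft, approve):
    """Hold a model-generated draft until a human signs off.

    `approve` stands in for the real review step (a person reading the
    draft). Nothing is returned for sending unless it returns True, so
    skipping review is a code-path error, not a lapse of discipline.
    """
    if not approve(draft):
        raise RuntimeError("Draft rejected: nothing leaves the gate.")
    return draft

# A draft becomes a deliverable only once a reviewer approves it.
approved = require_review("Summary of the quarterly report", approve=lambda d: True)
```

The design choice worth noting: the gate fails loudly on rejection rather than quietly passing the draft through, which is what keeps accountability with the human.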
The actual rule
Claude is excellent at tasks where: the output is a draft or input to a human's work, errors are visible and correctable, and the work benefits from speed and scale. It is the wrong tool when: the output is a final answer, errors are consequential and hard to spot, or the task requires accountability that must sit with a professional.
Most knowledge work has both kinds of tasks. The teams that use Claude well are the ones that have thought clearly about which is which.