Security and privacy for Claude admins: what you need to know
Before you roll Claude out to your team, you need to understand what Anthropic does with your data — and what your responsibilities are. Here is the non-alarmist version.
Security and privacy are the questions every IT admin gets asked before they can roll out any new tool. With AI, the questions are more loaded than usual — there are genuine uncertainties, a lot of vendor marketing about "enterprise-grade security," and real variation in what different plans actually offer.
Here is what you actually need to know as a Claude admin.
What Anthropic does with your data
On Claude.ai Team and Enterprise plans, Anthropic does not use your conversations to train its models. This is a contractual commitment, not just a policy statement. The data you and your team send to Claude through these plans is not training data.
On Free and Pro plans, the default is different — Anthropic may use conversations to improve Claude, though you can opt out in settings. This is standard for consumer AI products. If you are rolling out to a team, you should be on Team or Enterprise — which resolves this concern.
What Anthropic does do: process your messages to provide the service, store conversation history per your plan's data retention settings, and log usage for safety monitoring. This is standard for any cloud software service.
Data residency and retention
On Team plans, data is processed in Anthropic's infrastructure with standard retention periods. On Enterprise, you can negotiate custom data retention policies and, depending on agreement terms, specific infrastructure options.
If your organisation has data residency requirements — your data must stay within a specific geographic region — this is an Enterprise-level conversation. Standard Team plans do not offer geographic data isolation guarantees. Raise this with your Anthropic account contact before committing.
What your team should not put into Claude
Regardless of plan, there are categories of data that should not go into Claude or any third-party AI tool without explicit review:
- Personal data about customers or employees (names, email addresses, health information, financial records) — check your obligations under GDPR, CCPA, or relevant regulations before processing personal data through a third-party AI
- Credentials and passwords — obvious, but worth stating explicitly
- Legally privileged communications — attorney-client privilege may not survive processing through a third-party system without careful controls
- Regulated financial or health data — HIPAA, PCI, and similar regulations have specific requirements about where data can be processed
This is not a reason not to use Claude. It is a reason to be thoughtful about what data enters conversations. Your usage policy should make this explicit.
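If you want to back the policy with something more than trust, a lightweight pre-submission screen can catch the obvious cases. The sketch below is illustrative only: the patterns are simplistic, the category names are made up for this example, and a real control would use a proper DLP tool rather than a handful of regexes.

```python
import re

# Illustrative patterns only -- these catch obvious cases of sensitive
# data, not everything. A real control belongs in a DLP tool.
SCREEN_PATTERNS = {
    "email address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "possible API key": re.compile(r"\b(?:sk|pk|api)[-_]\w{16,}\b"),
    "possible card number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "US SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def screen_text(text: str) -> list[str]:
    """Return a policy warning for each category of sensitive data found."""
    return [
        f"Found {label}: review before sending"
        for label, pattern in SCREEN_PATTERNS.items()
        if pattern.search(text)
    ]

for warning in screen_text(
    "Contact jane.doe@example.com, key sk_live_abcdef1234567890"
):
    print(warning)
```

A check like this is a guardrail, not a guarantee; its real value is making the written policy concrete enough that people notice before they paste.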
SSO and access control
Single sign-on (SSO) is an Enterprise feature. It lets you manage Claude access through your existing identity provider (Okta, Microsoft Entra ID (formerly Azure AD), Google Workspace, etc.) — so users log into Claude with their company credentials, and you can provision and deprovision access through your standard IT processes.
On Team plans, you manage access through Anthropic's admin console: you invite users by email, remove them when they leave, and manage their roles. This is adequate for most organisations but requires you to actively maintain the list. SSO automates this via your IdP.
If your IT policy requires SSO for all cloud tools, you need Enterprise. If it is a preference rather than a requirement, Team's manual management is workable for organisations under ~100 people.
Practical security steps for Team plan admins
Audit who has access quarterly. Remove departed employees. Check that roles are appropriate.
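The quarterly audit can be as simple as comparing your Claude member list against your HR roster. A minimal sketch, assuming you can export both as CSVs (the filenames and column names below are hypothetical; adjust to whatever your exports actually contain):

```python
import csv

def load_emails(path: str, column: str) -> set[str]:
    """Read one email column from a CSV export into a normalised set."""
    with open(path, newline="") as f:
        return {row[column].strip().lower() for row in csv.DictReader(f)}

def audit(claude_users: set[str], active_staff: set[str]) -> dict[str, set[str]]:
    """Compare Claude seats against the current roster."""
    return {
        "revoke": claude_users - active_staff,   # departed, still has a seat
        "no_seat": active_staff - claude_users,  # on roster, no Claude seat
    }

# In practice, load the sets from your exports (hypothetical filenames):
#   claude = load_emails("claude_members.csv", "email")
#   staff  = load_emails("hr_roster.csv", "work_email")
result = audit(
    {"ada@example.com", "bob@example.com"},
    {"ada@example.com", "cam@example.com"},
)
for email in sorted(result["revoke"]):
    print(f"REMOVE: {email}")
```

Running this each quarter turns "remove departed employees" from a vague intention into a ten-minute diff.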
Review the Connector permissions you have enabled. Each active Connector has access to read from an external service using someone's credentials. Periodically check which Connectors are active and whether the access level is appropriate.
Set a clear policy on what data goes into Claude. Write it down, communicate it, and include it in onboarding for new team members. "Use judgment" is not a data policy.
Know Anthropic's current terms. Anthropic's data practices evolve. The authoritative source is Anthropic's privacy policy and your plan's DPA (Data Processing Agreement), not this article. Review these when you sign up and when Anthropic announces significant changes.
The proportionate response
Claude is a cloud software tool. The security and privacy considerations are similar in kind to Google Workspace, Salesforce, or any other SaaS product your company uses. The questions to ask are the same: what data goes in, who can access it, what does the vendor do with it, and how do you manage access.
For most organisations using Team plan for standard knowledge work tasks, the risk profile is manageable with straightforward controls. For organisations with regulated data, healthcare, or specific compliance requirements, the Enterprise conversation is worth having before rolling out — not after.