Articles
Practical AI, by persona.
Decision guides, implementation patterns, and what building with AI actually looks like. Pick your context — the reading list adjusts.
Everything published, by type.
If you're new, read these first.
New to AI? Start here.
You've heard about AI. You've maybe tried it once, got something weird, and closed the tab. This is the honest version of what it is, why most people's first attempts disappoint them, and what to actually do.
What's safe to share with Claude — and what isn't
The most common reason professionals hold back from using Claude for real work: they're not sure whether they're putting confidential information at risk. Here's the direct answer.
How to write a good prompt
Most prompt problems aren't caused by bad AI — they're caused by the same three things: no role, no context, no constraint. Here's how to fix all three in under two minutes.
The most common Claude mistakes — and how to fix each one
The mistakes that cause 80% of Claude frustration are predictable, and most of them happen in the first sentence of your message. Here's the full list, in order of how much they cost you.
Why starting a new chat almost always saves money
Ten turns in one conversation costs roughly 5.5x as much as ten separate conversations. Here's why that happens, when it matters, and the one habit that changes how you use Claude.
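The 5.5x figure follows from a simple toy model (an illustrative assumption, not the article's exact accounting): if every turn resends all prior turns as input context, ten turns in one chat process roughly 1 + 2 + … + 10 = 55 turn-sized inputs, versus 10 for ten fresh chats.

```python
# Toy cost model (illustrative assumption): each turn's input
# includes every previous turn, so input tokens grow linearly
# and total cost grows quadratically.
def one_long_chat(turns: int) -> int:
    # Turn i resends turns 1..i as context: i units of input.
    return sum(i for i in range(1, turns + 1))

def separate_chats(turns: int) -> int:
    # Each fresh chat processes only its own turn: 1 unit each.
    return turns

long_cost = one_long_chat(10)    # 55 turn-sized inputs
fresh_cost = separate_chats(10)  # 10 turn-sized inputs
print(long_cost / fresh_cost)    # 5.5
```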
Claude Code vs. the web app: which one should you use?
They are different tools for different jobs. The web app is for thinking, writing, and analysis. Claude Code is for working inside your codebase. Here is the decision guide — including when neither is the right answer.
Five habits that separate operators who get results from those who don't
The gap between people who get consistent value from Claude and people who don't isn't intelligence or skill — it's five habits most people never develop. Here's what effective Claude users actually do differently.
Running your first AI pilot: a 30-day plan
Most AI pilots succeed technically and fail politically. The evidence exists — it just wasn't collected in a way anyone can act on. Here's how to design a pilot that produces results your organization will actually use.
Your first Claude API call: what you actually need to know
The official quickstart gets you to 'Hello, world.' This gets you to understanding why Claude gave you a worse answer than the web app — and exactly how to fix it.
How to build a business case for Claude at your company
Your manager doesn't need an AI pitch — they need a staffing and efficiency argument with AI as the mechanism. Here's how to write it so leadership can say yes.
Implementation guides. Production-focused, code included.
Your first Claude API call: what you actually need to know
The official quickstart gets you to 'Hello, world.' This gets you to understanding why Claude gave you a worse answer than the web app — and exactly how to fix it.
Streaming Claude responses: implementation patterns and the tradeoffs
When to stream, how to implement it properly in Python and TypeScript, error handling mid-stream, and the UX patterns that actually work.
Building a RAG pipeline from scratch: the decisions that actually matter
Building RAG is easy. Building RAG that doesn't silently degrade over time is hard. Here's the production-ready version — including the retrieval failures most tutorials don't mention.
Writing evals that catch regressions before your users do
What to measure, how to structure test cases, and how to run evals in CI so that prompt changes and model updates don't silently break your product.
Prompt caching with Claude: cut costs 80% on repeated context
Prompt caching can cut your Claude API costs by 80% on the requests that matter most. Here's exactly how to implement it, why most teams cache the wrong things first, and what to fix.
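The savings math can be sketched with a back-of-envelope model. The multipliers below are assumptions to verify against Anthropic's current pricing page (cache writes around 1.25x the base input price, cache reads around 0.1x):

```python
# Back-of-envelope: relative cost of resending a long, stable
# system prompt with and without prompt caching. Multipliers are
# assumptions to check against current Anthropic pricing.
BASE = 1.0          # relative cost per request, uncached
CACHE_WRITE = 1.25  # first request writes the cache
CACHE_READ = 0.10   # subsequent requests read from cache

def cost(requests: int, cached: bool) -> float:
    if not cached:
        return requests * BASE
    # One cache write, then (requests - 1) cheap cache reads.
    return CACHE_WRITE + (requests - 1) * CACHE_READ

uncached = cost(100, cached=False)  # 100.0
cached = cost(100, cached=True)     # 1.25 + 99 * 0.1 = 11.15
print(1 - cached / uncached)        # ~0.89: roughly 80-90% saved
```

The savings only apply to the cached prefix, which is why caching pays off on long prompts reused across many requests, not on content that changes every call.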
Cutting Claude API costs without cutting quality
Token budgets, model routing, caching, batching, and the decisions that have the biggest impact on your monthly bill.
Tool use with Claude: the complete implementation guide
Defining tools, parsing responses, handling multi-turn tool calls, parallel tool use, and the failure modes that will bite you in production.
Multi-agent orchestration: when one Claude isn't enough
Subagents, orchestrators, parallelism, and state management. The patterns that work and the ones that look good until they hit production.
How to evaluate whether your multi-agent pipeline is actually better
Most multi-agent systems ship without anyone measuring whether they beat a single well-prompted Claude call. Here is the evaluation methodology: golden datasets, LLM-as-judge scoring, ablation testing, and the cost/latency tradeoff equation.
Building a Claude chatbot that remembers users across sessions
Persistent memory for chatbots is not a Claude feature — it is an architecture decision. Here is how to build it correctly.
Deploying a Claude application: from localhost to production
Environment variables, rate limits, error handling, costs, and the things that bite you on your first production deploy. A practical checklist.
Monitoring a Claude app in production: what to log and what to alert on
Claude API calls are invisible unless you instrument them. Here is the logging structure, the metrics that actually matter, what Anthropic rate limiting looks like in practice, and the alert thresholds worth setting.
Production error handling for Claude applications
The errors you will definitely hit, the ones that will surprise you, and the patterns that make your app resilient when Claude or the API behaves unexpectedly.
Building a streaming chatbot with Next.js and Claude
End to end: API route, streaming SSE to the browser, React state, conversation history, and deployment. The complete working pattern.
Rate limiting patterns for multi-user Claude apps
When your app has more than one user, naive retry logic is not enough. Token budgeting, per-user quotas, request queuing, and graceful degradation — in code.
Adding authentication to your Claude app with NextAuth
Protect your API routes, tie conversations to users, and track per-user costs. The full integration from NextAuth setup to authenticated Claude calls.
Database-backed conversation history with Supabase and Claude
In-memory arrays disappear on page reload. How to persist conversation history to Supabase, load it back on session resume, and prune context intelligently.
Should I use Claude or build my own model?
The question most AI founders ask in month two. The honest answer covers fine-tuning economics, the cases where Claude is genuinely insufficient, and the trap of premature optimization.
How to use Claude with Linear: a practical guide
Linear is where engineering teams track issues and ship work. Claude helps you write better issues, triage faster, and generate the surrounding documentation that always falls behind. Here's the practical guide.
How to use Claude with Jira: a practical guide
Jira tracks your engineering work. Claude helps you write better issues, generate release notes, summarize sprint status for stakeholders, and turn customer language into engineer-readable tickets. Here's the practical guide.
Setting up Claude Code for your team: the leverage-ordered guide
The .claude folder has five layers. Most teams set up one and wonder why they keep correcting Claude. Here is what to configure in what order — and what you can skip entirely.
Claude Code vs. the web app: which one should you use?
They are different tools for different jobs. The web app is for thinking, writing, and analysis. Claude Code is for working inside your codebase. Here is the decision guide — including when neither is the right answer.
The agent harness: why your infrastructure matters more than your model
Most developers focus on the model. The engineers building production AI applications focus on everything around it. Here is what the agent harness is, why it determines whether your app actually works, and where to start building it intentionally.
The advisor tool: Opus-level reasoning at Sonnet prices
A new Claude API feature lets Sonnet or Haiku call Opus mid-task when they need help. You pay Opus rates only for those calls — everything else runs at Sonnet or Haiku cost. Here's what it does and when to use it.
Claude Managed Agents: a hosted agent loop without the infrastructure
Anthropic now runs the full agent loop for you — sandboxed execution, built-in tools, and event streaming included. Here's what you get and when it makes sense over building the loop yourself.
Claude + Confluence: keeping documentation from going stale
Confluence pages rot the moment the engineer who wrote them moves on to the next thing. Claude doesn't prevent that — but it makes the writing and updating fast enough that it might actually happen.
Security issues in Claude-powered apps (and how to avoid them)
The security vulnerabilities in most Claude apps aren't exotic — they're the same three mistakes: leaking system prompts, ignoring prompt injection, and trusting user input in tool calls. Here's how to fix all three.
When to use streaming — and when not to
Streaming makes sense when the user is waiting to read. It makes less sense when you need the complete output before doing anything with it. Here is the decision framework and the patterns for each.
Multi-agent failure handling: timeouts, partial outputs, and recovery patterns
Agents fail differently than APIs. When a sub-agent times out halfway through a pipeline, you don't just get an error — you get partial state. The patterns that make multi-agent systems actually recover.
Auditing your eval suite: are you testing the right things?
Most eval suites test what was easy to write, not what matters most. A structured audit finds the gaps before production does — coverage blind spots, flaky assertions, and the failure modes you forgot to cover.
The ant CLI: interact with the Claude API from your terminal
Anthropic's ant CLI gives you direct access to every Claude API endpoint from the command line — messages, models, batch jobs, agents. Install it, set your API key, and run API calls without writing application code.
Claude Opus 4.7: what's new and what the API changes mean
Anthropic's flagship model gets a significant upgrade — better software engineering, 3.75-megapixel vision, and a new xhigh effort level. Pricing is unchanged. There are API breaking changes versus Opus 4.6.
Migrating from Claude Opus 4.6 to Opus 4.7: the breaking changes
Opus 4.7 has five API breaking changes that will return 400 errors on existing code. Here's each one, the fix, and the full migration checklist. Also: Sonnet 4 and Opus 4 retire June 15, 2026.
Parallel agents in Claude Code: the desktop redesign
Claude Code's desktop app was rebuilt for running multiple coding tasks at once. A new sidebar manages sessions across repos, an integrated terminal and diff viewer replace external tools, and side chat lets you branch conversations without interrupting ongoing work.
Claude Code anti-patterns: what to stop doing
The failure-mode companion to every Claude Code setup guide. CLAUDE.md that's too big, MCP servers left running, sessions you never close, skipping Plan Mode, asking quick questions in the main thread — here's what each one costs you and how to fix it.
Decision guides for the questions that actually matter.
What AI is actually doing to your job
Block is eliminating middle management. Altman and Amodei predict the first billion-dollar one-person company. 36% of B2B companies already cut their SDR teams. The future of work isn't theoretical — it's playing out role by role, with specific data, right now.
Claude Code: what it is and whether your engineering team should use it
Claude Code is a different product from Claude.ai — it is an agentic coding tool that runs in the terminal. Here is what it does and when it makes sense.
When not to use Claude
Claude is genuinely powerful. It is also genuinely wrong for certain kinds of work. Knowing the difference is as important as knowing what it does well.
Opus, Sonnet, or Haiku: which Claude model should your team use?
Claude has three model tiers. Here is which one to use for what — and why defaulting to the most powerful one is usually a mistake.
Managed Agents: what they are and what they mean for your organisation
Anthropic just launched Managed Agents. Here is what they do, who they are for, and how to think about whether your team should use them.
Writing a system prompt that actually works
The system prompt is the highest-leverage thing you control when deploying Claude. Most are either too vague or too long. Here's what good looks like.
Prompt caching: why it matters when you're building with Claude at scale
If your application sends the same long system prompt on every request, you're paying to re-process it every time. Prompt caching stops that.
How to think about Claude's context window (and when it actually matters)
200,000 tokens sounds enormous. In practice, how you use that space changes everything about the quality of your outputs.
Skills and Connectors: how to make Claude actually useful at work
By default, Claude only knows what you tell it in the conversation. Skills and Connectors change that — here's what they do and which ones are worth turning on.
What to automate first with AI
Every company has ten things they could automate with AI. About two of them are actually good starting points. Here's the framework for finding them.
What MCP actually means for your business (it's not just for developers)
The Model Context Protocol sounds technical. The practical implication is simple: AI tools can now connect to your actual systems in a standardised, safe way. Here's what that means for how you work.
When to use extended thinking — and when it's a waste
Extended thinking makes Claude noticeably better on hard problems. But most tasks don't need it, and using it everywhere will slow you down and cost more.
How to set up Claude Projects for your team (and what most people miss)
Projects are the most underused feature in Claude. Here's how to configure them so your whole team gets consistent outputs — not whatever each person happens to type.
How to work with Claude when accuracy matters
Hallucination isn't a reason to avoid Claude for high-stakes work. It's a constraint to design around. Teams that get this right build AI into their most important workflows. Teams that don't, limit AI to the low-stakes ones.
How to know if your Claude integration is actually working
Most teams go live on gut feel and find out six weeks later that Claude has been quietly giving wrong answers. Here's how to know before that happens — without being an engineer.
Do you actually need RAG? The decision most operators get wrong
Most teams jump to RAG because it sounds like the right answer. Half of them didn't need it. Here's how to know which situation you're in — before you build anything.
The failure patterns that catch most teams off guard.
The month-4 problem: when Claude adoption quietly stops
Most teams who roll out Claude see strong early results and a quiet decline by month 4. It's not that Claude stopped working — it's that the rollout stopped. Here's what actually happened, and what to do about it.
Why your CLAUDE.md stops working — and how to keep it accurate
Most teams set up CLAUDE.md once and never touch it again. Three months later, Claude is ignoring parts of it, following instructions that no longer apply, and producing inconsistent output for reasons nobody can identify.
Why Claude keeps feeling inconsistent — and the actual fix
Claude isn't being random — inconsistency almost always has a specific cause you can find and fix. Here are the five most common ones, in order of how often they appear.
What AI actually cannot do
Most AI disappointment comes from the wrong expectations — not the wrong tool. Here is a plain-English list of what Claude genuinely can't do, so you know what to trust and what to verify.
What to tell clients about AI (and what not to)
The wrong AI conversation with a client creates a problem you'll spend months managing. The right one positions you as the person who gets it before everyone else does. Here's the script for both scenarios.
What goes wrong when founders build AI products
The failure modes for AI startups are specific and predictable. Most of them have nothing to do with the AI.
Why RAG implementations fail (and how to avoid the most common mistakes)
RAG is one of the most powerful things you can build with Claude. It's also where a lot of teams get stuck. Here are the failure patterns worth knowing before you start.
The hallucination patterns that catch operators off guard
Everyone knows AI can make things up. What surprises people is which specific situations trigger it — and how confident Claude sounds when it does.
Why your first AI pilot probably failed
Most AI pilots don't fail because the AI wasn't good enough. They fail for three very predictable reasons — none of which are technical.
The system prompt mistakes that make Claude worse, not better
More instructions don't mean better results. Most system prompts fail in one of five predictable ways — and fixing them is usually the highest-leverage thing you can do to improve your Claude integration.
Why most AI agent pilots fail in the first month
Building an AI agent that demos well is easy. Building one that works reliably in production is hard. The gap between the two is almost always one of the same five problems.
What it actually looks like when teams implement AI.
What using Claude actually looks like for a solo founder
The founders who use AI well don't use it for everything — they use it for the three tasks that used to eat half their week. Here's what those tasks are and exactly how the workflow runs.
What using Claude actually looks like for an ops manager
Operations work is mostly translation: turning messy reality into clear documentation, turning vendor conversations into decisions, turning incidents into processes. Here is where Claude actually fits.
What using Claude actually looks like for a marketing manager
Where Claude genuinely saves hours for marketing managers, where it falls flat, and what the actual workflow looks like.
What using Claude actually looks like for a CS manager
A CS manager who uses Claude well can do meaningful work on renewals, QBRs, and escalations in the gaps between other work. Here's what that workflow actually looks like across a full day.
Using Claude for customer discovery: what works and what makes it worse
Customer discovery is the one job where Claude is most dangerous if used wrong. Here's how to use it to prepare better, synthesize faster, and avoid the trap of letting it replace the conversations.
The weekly review with Claude: a system for actually thinking about your week
A weekly review sounds like a productivity cliché — until you do one with Claude. Here's the version that takes 20 minutes and actually changes what you do the following week.
Getting your first ten customers for an AI product
Distribution is the hard part. The ten moves that actually work at the earliest stage — before you have brand, before you have case studies, before you have anything except the product.
How to validate your startup idea using Claude (without fooling yourself)
Claude can't tell you if your idea is good. It can help you figure out whether your assumptions are wrong — before you spend three months building something nobody wants. Here's how to use it for that.
How HR teams use Claude to make onboarding actually work
New hires spend their first week confused and HR spends it answering the same 40 questions. Here is the workflow that fixes both — without building a chatbot.
How CS managers use Claude to prepare for QBRs and renewals
QBR prep used to take half a day per account. Here is the workflow that gets it to 45 minutes — without cutting corners on the things that actually matter.
Claude for engineering teams: beyond the obvious
Engineers are often the last team to adopt Claude, and the first to find the most interesting uses once they do. The field note on what actually works — beyond autocomplete.
What AI actually looks like for a data and analytics team
Data teams have a counterintuitive relationship with Claude — it is not about the analysis, it is about everything around it.
What AI actually looks like for a legal team
Legal has real limits with AI — and real opportunities. Here is where Claude fits in a legal context, and what should stay out of scope.
What AI actually looks like for a product team
Product managers spend a disproportionate amount of time writing. Here is where Claude changes that — and how to set it up so it actually fits how product work gets done.
What AI actually looks like for a finance team
Finance teams deal with high-stakes, high-precision work. Here is where Claude genuinely helps — and where it has no place.
You've been asked to set up Claude for the company. Here's where to start.
Someone handed you this job. Maybe you volunteered. Either way, you're now the person responsible for getting Claude working for the whole organisation. This is what the first two weeks look like.
What AI actually looks like for an HR team
HR involves a lot of writing, reviewing, and communicating. Here's where Claude saves real time — and where to be careful.
What AI actually looks like for an operations team
Ops has more to gain from AI than almost any other function — but the use cases look different to what most people expect.
What AI actually looks like for a sales team
Not "AI will write your emails." What sales teams are genuinely using Claude for, what works, and the one thing most reps get wrong.
How to actually evaluate whether your AI rollout is working
Most AI rollout evaluations are either too vague ("the team likes it") or too technical (automated test suites that miss what users actually care about). Here's what works.
When Claude starts doing the work: what AI agents look like in practice
An agent isn't just a chatbot that can click buttons. It's a fundamentally different relationship between a human and an AI. Here's what that looks like when it's working.
How marketing teams are actually using Claude
Content is the obvious use case. But the marketing teams getting the most value from AI have figured out something different.
What AI actually looks like in a customer success team
What CS teams are actually using AI for right now — what's working, what isn't, and what nobody tells you before you start.
Using Claude for customer support: what actually works
Customer support is the most common first AI use case for a reason — and the place where the most teams get burned. Here's what a working implementation looks like, and what the common shortcuts miss.
Clear explanations of the ideas behind the tools.
Claude Code routines: automate workflows on a schedule
A routine is a Claude Code automation that runs on a schedule, from an API call, or in response to a GitHub event — without your laptop open. Here's what they are and where they fit best.
Advisor tool: pairing a fast executor with Opus-level judgment
The Advisor tool runs a fast model for most tasks while escalating complex decisions to Opus — combining speed and depth in a single agent workflow.
Monitor tool: real-time background process streaming in Claude Code
The Monitor tool spawns a background process and streams its stdout line-by-line into the conversation without blocking Claude Code's main thread.
Ultraplan: Claude Code's comprehensive planning mode
Ultraplan generates a detailed implementation plan before writing any code — useful for large multi-file changes where a wrong direction is expensive.
Your manager said yes. Now what? The first 48 hours of a Claude rollout
Approval is not adoption. What to do in the window between 'we can use this' and 'we actually use this' — before momentum stalls and inertia wins.
How to get IT to approve your Claude rollout
IT isn't the enemy of your Claude rollout — they're the ally you haven't briefed yet. Here's exactly what IT needs to see before they can say yes, and how to give it to them without triggering a six-month procurement cycle.
Running Claude after the rollout: the ongoing admin workload
Most admin guides cover getting Claude set up. This one covers what happens after: the recurring work, the common incidents, the governance tasks, and what a realistic monthly admin time commitment looks like for a 100-600 person company.
Setting up Claude for your team: the team lead guide
If every person on your team sets up Claude from scratch, they will each get different results and most will give up. Here is how to give your team a consistent starting point — whether you have admin access or not.
Your first week with Claude at a new job
Most people's first week with Claude follows the same pattern: one good result, one confusing result, and a vague sense it's not as useful as advertised. Here's how to break that pattern in the first three days.
AI usage policy template (copy, edit, send to legal)
A complete, one-page AI usage policy template for company-wide Claude deployments — with three variants: general company, professional services, and healthcare-adjacent. Fill in the bracketed fields and you have a policy ready for legal review.
How to get a skeptical teammate to actually try Claude
Skeptical colleagues aren't anti-AI — they've usually had a bad experience, heard about one, or watched a tool get mandated without explanation. Here's how to address all three without being annoying about it.
What to actually put in your CLAUDE.md
Everyone says to set up CLAUDE.md. Nobody shows you what to write. Here are four real starting templates — for a solo project, a team backend, an agency client repo, and an ops/admin setup — with annotations explaining what each section does.
CLAUDE.md vs. hooks: instructions Claude follows and rules it cannot break
CLAUDE.md tells Claude what you want. Hooks make certain behaviors guaranteed. Most teams only use one. Here is how to know which one you actually need.
Claude Code for your team: the five decisions that actually matter
Most teams let one developer set up the .claude folder and never discuss it again. These are the decisions you should make together — and what breaks when you skip them.
Claude + Salesforce: what actually works
The Salesforce connector gives Claude read access to your CRM. The workflows that get real use — account research, pipeline summaries, pre-call prep — are ones where the value is in combining what Salesforce knows with what Claude can reason about.
Your Claude prompt isn't working. Here's how to fix it.
Most prompt failures come from one of five fixable problems. Here's a diagnostic framework for figuring out what went wrong — and how to fix it without starting from scratch.
How to reduce Claude hallucinations: a practical checklist
Hallucination — Claude confidently stating something that isn't true — is the failure mode that kills trust fastest. Here's exactly how to minimize it in practice.
Claude for CRM workflows: using Claude with HubSpot
Claude doesn't integrate natively with HubSpot, but the right workflows make it your best CRM assistant. Here's how sales and CS teams actually use Claude alongside HubSpot.
Automating workflows with Claude and Zapier
Zapier connects Claude to almost any tool you already use — without writing code. Here are the automations worth building, how to set them up, and what to watch for.
Setting up Claude in Slack: what to configure and what to skip
Claude works natively inside Slack through a built-in integration. Here's what it actually does, how to set it up properly, and whether it's worth using for your team.
Using Claude to declutter your digital life
Your Downloads folder, your desktop, your bookmarks, your subscriptions — the digital clutter that accumulates until your computer feels like a junk drawer. Here's how Claude helps you actually deal with it.
How to give Claude precise instructions when using connectors
Vague instructions get vague results. The difference between "find me that doc in Notion" and an instruction that actually works — every time.
What executives actually need to know about AI at work
You're not implementing it yourself — but you're approving the budget, setting expectations, and accountable for the results. Here's the executive-level view: what to ask, what to expect, and where things go wrong.
How to sequence a Claude rollout across multiple teams
Rolling out to 10 teams at once is a recipe for chaos. Here is the sequencing strategy that works — who to start with, how to expand, and what to do when a team is struggling.
Security and privacy for Claude admins: what you need to know
Before you roll Claude out to your team, you need to understand what Anthropic does with your data — and what your responsibilities are. Here is the non-alarmist version.
MCP for operators: what it means and when you need it
MCP is the plumbing that lets Claude connect to anything. Here is what operators need to understand — and when it becomes relevant for your organisation.
Claude Artifacts: what they are and when to use them
Artifacts let Claude produce standalone outputs — documents, code, charts — outside the chat. Here is when they matter and how to get the most from them.
When to use Extended Thinking — and when not to
Extended Thinking gives Claude time to reason through hard problems before answering. Here is what it is actually good for, and what it adds over standard Claude.
How to get the most out of Claude Deep Research
Deep Research works differently from a search engine. Here is what it actually does and how to use it well.
How Claude Memory actually works — and how to use it
Memory lets Claude remember things across conversations. Here is what it remembers, what it forgets, and how to make it useful for team work.
The context window in practice: what it means for how you work
The context window shapes what Claude can and can't do in any given conversation. Here is how to work with it.
The human side of rolling out AI at your company
Getting Claude configured is the easy part. Getting people to actually change how they work is harder. Here is what that looks like when done well.
Prompt engineering for operators: what actually matters
Most "prompt engineering" advice is either too academic or too simplistic. Here is the practical version — the five things that reliably improve outputs.
How to actually measure the ROI of Claude at your company
The problem with AI ROI isn't that it's hard to measure — it's that teams measure the wrong things. Here's what to track instead, and how to present it in a way leadership will actually act on.
How to write a system prompt that actually works
The system prompt is the most powerful thing you control in Claude. Most people write them once, poorly, and wonder why outputs are inconsistent. Here's the method.
How to write an AI usage policy your team will actually follow
Most AI usage policies are either too vague to be useful or so restrictive they get ignored. Here is the framework that actually works.
How to structure Claude Projects across your whole organisation
One Project per team function is the right starting point. Here is the architecture that works at scale — naming, ownership, system prompt governance, and what to avoid.
Which Claude plan is right for your organisation?
Free, Pro, Team, or Enterprise — here's how to think through the decision, what you actually get at each tier, and when upgrading makes financial sense.
When to use Deep Research and how to get the most from it
Deep Research is a different tool from web search — it runs longer, uses more tokens, and is worth it for specific questions. Here is when to use it.
Claude Memory: what it remembers, how to use it, and how to manage it
Claude now remembers things about you across conversations. Here is how it works, what to tell it to remember, and how to keep it useful.
Cowork and Dispatch: Claude working on your computer
Claude can now control your desktop and complete tasks while you do other things. Here is how it works, what it is good at, and what to be careful about.
Connectors: which to enable, which to disable, and why it matters
Connectors give Claude access to your tools. But having all of them on all the time costs tokens and introduces noise. Here is how to manage them.
Claude Skills: what they are, which to enable, and when to use them
Skills give Claude superpowers — web search, code execution, file creation. Here's which ones matter, how to set them up, and when to turn them off.
How to minimise your Claude token usage without sacrificing quality
Tokens are what you pay for. Here are the practical things you can do to use fewer of them — from how you prompt to which model you choose.
Step-by-step: researching a prospect with Claude before a call
A concrete workflow for turning 30 minutes of pre-call research into 5 minutes — without losing the signal that makes a call go well.
How to set up Claude for your whole company
The two ways to give Claude context about your organisation — shared Projects and individual personalisation — and how to decide which fits your rollout.
How tool use works: what happens when Claude calls a function
Tool use is the mechanism that lets Claude take actions — call APIs, run code, search files — instead of just generating text. Here's exactly what happens when Claude uses a tool.
Adaptive thinking: how Claude decides how hard to think
Claude doesn't apply the same effort to every question. Here's what adaptive thinking is, how it works, and why it matters for the outputs you get.
The problem of making AI do what you actually mean
Alignment is the core challenge of AI development: building systems that reliably do what humans intend. It's harder than it sounds, and understanding why helps you build better applications today.
Why Claude starts talking before it's finished thinking
Streaming sends Claude's response token by token as it's generated, instead of waiting until the full response is ready. The difference in perceived speed is significant — and the implementation is simpler than you'd expect.
The unit everything in AI is priced and measured in
Tokens are how language models read and write text — and how every AI API charges you. Understanding them turns abstract pricing into something you can predict and control.
The dial between predictable and creative
Temperature controls how much Claude surprises you. Turn it down for consistent, focused answers. Turn it up for more varied, exploratory ones. Knowing when to do each is a real skill.
When a well-crafted prompt isn't enough
Fine-tuning is how you train a model on your specific data to change its behavior at a deeper level than prompting can reach. It's powerful — and often unnecessary. Knowing which situation you're in saves a lot of time.
How to know if your Claude integration is actually working
Evals are the testing framework for AI — and they work differently from software tests. You're not checking for correct answers. You're measuring behavior across a range of realistic situations.
How Claude reaches beyond the conversation
Tool use is the mechanism that turns Claude from a text generator into something that can actually do things — search the web, run code, query your database, send messages.
Pay for your context once, not every time
Prompt caching is Claude's way of remembering the expensive part of a conversation so you don't have to re-send — and re-pay for — the same context on every request.
Why AI gets confident things wrong — and how to design around it
Hallucination isn't a bug that gets patched. It's a structural feature of how language models work. Understanding why it happens is the first step to building applications that aren't derailed by it.
The engine under everything
A large language model is what Claude is at its core — and understanding how it works changes how you think about everything else in AI.
How to give Claude a memory it doesn't have by default
RAG is the most practical technique in AI engineering — and the most misnamed. It's not magic. It's just giving the model the right pages of the book before it answers.
Why Claude has values instead of just rules
Most AI safety is a list of don'ts. Constitutional AI is the method Anthropic used to teach Claude to reason about right and wrong — the same way you'd want a thoughtful colleague to.
When Claude stops answering and starts doing
There's a clean line between a model that responds to questions and one that takes actions in the world. Understanding that line is the most important thing to know about building with AI right now.
The whiteboard every AI conversation shares
The context window is the single number that shapes everything about how Claude thinks with you — and most people are using only a fraction of it.
How to brief Claude before the conversation starts
The system prompt is where you stop asking Claude to be general-purpose and start making it yours. Most operators underuse it.