AI Codex
Build with AI · Step 4 of 9

Build, buy, or prompt: the early-stage AI stack decision

In brief

Most founders overcomplicate this. For 90% of early-stage AI products, the right stack is simpler and cheaper than you think — and fine-tuning your own model is almost never the answer.

6 min read · Build vs. Buy


There is a version of this conversation that founders have with themselves that sounds like: "Should we fine-tune our own model, or use an existing API?" And there is the version they should be having: "What problem are we actually solving, and what is the cheapest way to test whether we can solve it?"

Those are different conversations. The first one is about technology. The second one is about the business. At the earliest stage, almost every technical decision is really a business decision in disguise.

Here is the framework for making this decision clearly, without letting the interesting technical question crowd out the important business one.

The three options

When founders say they are "building an AI product," they usually mean one of three things — and the distinction matters enormously for cost, speed, and what you can actually ship.

Option 1: Claude.ai (or equivalent consumer product)

You are using Claude through a web interface. No API, no code, no infrastructure. Your "product" is the workflow you design around the tool, not a piece of software.

This sounds like a toy. It is not. For a solo founder at the earliest stage — talking to customers, learning what the real problem is, figuring out if your insight is correct — this is the right setup. You can build and test an entire workflow, demonstrate value to customers, and iterate in days, not weeks.

The constraint: you cannot give this to customers directly. It is your tool, not their product.

Option 2: Claude API (or equivalent)

You are calling the API from code and building a product layer on top. Customers interact with something you built, not directly with Claude. You control the experience, the prompts, the data flow.

This is the right move when you have figured out what you are building and you are ready to give it to people. Not before. The overhead of building a real product — auth, data handling, error states, billing, the dozen things that are not the AI — is significant. Every week you spend building infrastructure is a week you are not learning from customers.

The signal that you are ready to go here: you have run the workflow manually enough times that you know exactly what good output looks like, and you know how to prompt for it reliably.
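A minimal sketch of what "a product layer on top of the API" means in practice, using the Anthropic Python SDK (`pip install anthropic`). The system prompt, model name, and summarizer use case here are illustrative placeholders, not a prescription. The useful habit is the separation: the prompt lives in a pure function you can test, and the network call stays thin.

```python
# Sketch of a thin product layer over the Claude API.
# The system prompt encodes "what good output looks like" — the thing
# you should already know from running the workflow manually.

def build_request(user_input: str) -> dict:
    """Pure function: turns customer input into an API request payload.

    Keeping this separate from the network call means the prompt and
    parameters can be unit-tested without an API key.
    """
    system_prompt = (
        "You are a customer-review summarizer. Return exactly three "
        "bullet points, each under 20 words."  # illustrative task
    )
    return {
        "model": "claude-sonnet-4-5",  # placeholder; pin the model you validated against
        "max_tokens": 500,
        "system": system_prompt,
        "messages": [{"role": "user", "content": user_input}],
    }

def summarize(client, user_input: str) -> str:
    """Thin wrapper: all it does is send the payload and unwrap the reply."""
    response = client.messages.create(**build_request(user_input))
    return response.content[0].text
```

In production you would construct the client once with `client = anthropic.Anthropic()` (which reads `ANTHROPIC_API_KEY` from the environment) and pass it in, which also makes `summarize` easy to mock in tests.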

Option 3: Fine-tuned or custom model

You are training or fine-tuning a model on your data. This is expensive, slow, requires ML expertise, and for most early-stage products, it is solving a problem you do not yet have.

The case for doing this is real but narrow: you have a use case that requires performance that frontier models cannot deliver, you have enough proprietary data to train on, and you have validated with customers that the product is worth building. If all three of those are true, fine-tuning might make sense. If any one is missing — especially the third — you are building a technical solution to a business problem you have not solved yet.

The cost reality

Let's be concrete about what the Claude API actually costs at early stage.

A typical customer interaction — a few hundred words in, a few hundred words out — costs roughly one to three cents at current API pricing. A power user doing fifty interactions a day is costing you a dollar or two per day. Your first hundred active users, most of them far lighter than that, are costing you a couple hundred dollars a month in API costs.

This is not your biggest problem. Your biggest problem is that you do not have a hundred active users yet. By the time API costs become meaningful, you should have a business that can bear them.
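The back-of-envelope math above is worth making explicit. The per-token prices and tokens-per-word ratio below are assumptions for illustration; check current pricing before relying on the numbers.

```python
# Back-of-envelope API cost model. All constants are assumptions —
# substitute the current published per-token prices for your model.

INPUT_PER_MTOK = 3.00    # assumed $ per million input tokens
OUTPUT_PER_MTOK = 15.00  # assumed $ per million output tokens
TOKENS_PER_WORD = 1.3    # rough average for English text

def cost_per_interaction(words_in: int, words_out: int) -> float:
    """Dollar cost of one request/response pair."""
    tokens_in = words_in * TOKENS_PER_WORD
    tokens_out = words_out * TOKENS_PER_WORD
    return (tokens_in * INPUT_PER_MTOK + tokens_out * OUTPUT_PER_MTOK) / 1_000_000

def monthly_cost(users: int, interactions_per_day: float,
                 words_in: int = 400, words_out: int = 400) -> float:
    """Monthly API spend for a user base with uniform usage."""
    return users * interactions_per_day * 30 * cost_per_interaction(words_in, words_out)
```

Under these assumptions, a 400-words-each-way interaction lands around a cent, and a hundred users averaging a handful of interactions a day stays in the low hundreds of dollars per month — consistent with the rough figures above.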

The founders who spend significant time optimizing API costs at the early stage are optimizing the wrong thing. Optimize for learning speed instead.

The real question: what are you actually building?

The build/buy/prompt decision is downstream of a more fundamental question: what is your product, really?

There are three honest answers:

The product is the AI itself. You are making the AI dramatically better at a specific domain — medical diagnosis, legal document review, code generation for a specific framework. The AI is the thing. This is where fine-tuning or custom models might eventually make sense, but even here, you should start with a prompt-engineered frontier model and see how far it gets you before going custom.

The product is the workflow around the AI. You are connecting AI to specific data sources, building the right interface, handling the edge cases, making it usable by people who would never go near a raw API. The AI is excellent already — you are making it accessible and reliable for a specific context. This is the Claude API path.

The product is the insight, not the implementation. You have a specific view on a market, a customer relationship, a distribution channel — and AI makes it possible to act on that insight at scale. The AI is a component, not the product. This is where many of the best early-stage AI companies actually are: the moat is the customer relationship or the data or the distribution, and the AI is what makes it economically viable.

Knowing which of these you are building changes your priorities completely. If you are building the first, you need ML expertise. If the second, you need product and engineering. If the third, you need customers and a distribution strategy.

Decision framework

Answer these questions in order:

  1. Do you know what good output looks like? If you cannot define what a successful output looks like, you are not ready to build anything. Spend more time with customers.

  2. Can you get there with a well-crafted system prompt? Try this first. Frontier models are remarkably capable with good prompting. Most teams discover they can get 80% of the way there without any code at all.

  3. Do you need to put this in customers' hands? If yes, you need the API. If you are still testing whether the idea works, you do not.

  4. Is the performance gap real? If frontier models with good prompting cannot deliver the quality you need, and you have validated that customers care enough to pay for the improvement, custom models become a real conversation. Not before.
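The four questions reduce to a literal decision procedure. This sketch is deliberately naive — the point is the order of the questions, not the sophistication of the logic — and the recommendation strings are paraphrases of the guidance above.

```python
# The decision framework as code. Answer the four questions in order;
# the first disqualifying answer determines the recommendation.

def recommend_stack(knows_good_output: bool,
                    prompting_gets_there: bool,
                    customers_need_it: bool,
                    perf_gap_validated: bool) -> str:
    if not knows_good_output:
        return "Not ready to build anything. Spend more time with customers."
    if prompting_gets_there:
        if customers_need_it:
            return "Claude API: build the product layer."
        return "Claude.ai: run the workflow manually and keep learning."
    if perf_gap_validated:
        return "Custom or fine-tuned model is now a real conversation."
    return "Prompting falls short, but validate that customers will pay for the gap first."
```

Note that "custom model" is only reachable after prompting has genuinely failed *and* customers have validated the gap — mirroring the order of the questions.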

The honest answer for most early-stage founders: start with Claude.ai, move to the API when you have ten customers who want the product, and do not think about custom models until you have a real business that depends on performance you cannot get from the API.

The fastest path to product-market fit is not the most technically sophisticated one. It is the one that lets you learn the fastest.
