Product strategy for AI features: when to build, when to buy, when to wait
Every product leader I talk to right now is dealing with the same pressure: "We need an AI strategy." The board wants it. The CEO mentioned it in the all-hands. Competitors are shipping AI features. And their PMs are sitting there trying to figure out which of a hundred possible AI features would actually make the product better.
Most of them will get this wrong. Not because they're bad at their jobs, but because the decision framework they're used to doesn't account for how fast the underlying technology is moving.
The speed problem
Traditional build-vs-buy decisions assume the technology landscape is relatively stable. You evaluate the options, make a decision, and execute. The landscape might shift over 12-18 months, but not dramatically.
AI doesn't work that way. The capabilities available six months from now will be meaningfully different from what's available today. A feature that requires custom model training today might be available as an API call next quarter. An integration that seems like a competitive advantage right now might become table stakes by the time you ship it.
This creates a third option that product teams aren't used to considering: wait. Not wait because you're indecisive, but wait as a strategic choice because the cost of building the same capability will be dramatically lower in six months.
The problem is that "wait" feels passive, and product teams hate feeling passive. So they build, often too early, and end up maintaining custom infrastructure for something that's now available off the shelf.
A framework for the decision
I've been using a modified version of the classic build-vs-buy framework that adds the time dimension. It's not perfect, but it's helped teams I work with avoid the most common mistakes.
Build when the AI capability is core to your product's differentiation and requires deep integration with your proprietary data. If the AI feature is the reason customers choose you over competitors, and it needs to work with data that only you have, building makes sense. This is true even if the underlying models improve, because your advantage comes from the data and integration layer, not the model itself.
Buy when the capability is important but not differentiating, and good solutions already exist. AI-powered search, summarization, content generation, and basic analytics are increasingly available as APIs and SDKs. If you're adding these to your product as table-stakes features, buying is almost always the right call. Your team's time is better spent on the things that make your product unique.
Wait when the capability would be valuable but current solutions require significant custom work that's likely to be simplified soon. This is the hardest call because it requires predicting the technology trajectory, and nobody's great at that. But some signals help: if major AI labs are actively working on the problem, if there are already early solutions that are 60-70% of the way there, and if your customers aren't demanding it yet, waiting is probably the right move.
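The three calls above can be sketched as a toy decision helper. To be clear, this is my own illustration of the framework, not a rigorous tool: the signal names and the priority order are assumptions I've made to keep the sketch readable.

```python
from dataclasses import dataclass

@dataclass
class Capability:
    """Signals about a candidate AI capability (illustrative, not exhaustive)."""
    differentiating: bool            # is it a reason customers choose you?
    needs_proprietary_data: bool     # does it depend on data only you have?
    good_vendor_options: bool        # do solid off-the-shelf APIs/SDKs exist today?
    likely_commoditized_soon: bool   # are labs/vendors actively closing the gap?
    customer_demand_now: bool        # are customers asking for it today?

def decide(c: Capability) -> str:
    # Build: core differentiation plus proprietary-data integration.
    if c.differentiating and c.needs_proprietary_data:
        return "build"
    # Buy: important but not differentiating, and good solutions exist.
    if c.good_vendor_options:
        return "buy"
    # Wait: valuable later, costly now, and likely to get cheaper soon.
    if c.likely_commoditized_soon and not c.customer_demand_now:
        return "wait"
    # Otherwise the signals don't point clearly anywhere: revisit next quarter.
    return "revisit"

# Example: table-stakes summarization with mature vendor APIs.
print(decide(Capability(False, False, True, True, True)))  # → buy
```

The point of writing it down this way is the ordering: the build question comes first because a true data-plus-differentiation advantage trumps vendor availability, and "wait" only fires when nobody is demanding the feature yet.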
Common mistakes I keep seeing
Mistake 1: Building AI features to check a box. A company adds an AI chatbot to their B2B SaaS product because "everyone's doing it." The chatbot can answer basic questions about the product, but customers already have documentation, support tickets, and account managers for that. Nobody uses the chatbot. The team spent three months building something that didn't solve a real problem.
The fix: start with the customer problem, not the technology. "Our customers waste 20 minutes finding the right report" is a problem worth solving. "We need AI in our product" is not.
Mistake 2: Building when you should buy. A team builds custom document summarization using fine-tuned models because the off-the-shelf APIs weren't quite good enough six months ago. By the time they launch, the APIs have improved to the point where they're comparable to the custom solution, and the team is now maintaining infrastructure they didn't need.
The fix: before committing to a build, ask "What would need to be true for a buy option to work?" Then check if those conditions are likely to be met in the next 6-12 months.
Mistake 3: Treating AI features like regular features. A team scopes an AI feature the way they'd scope any other feature: fixed requirements, defined acceptance criteria, estimated timeline. But the AI component behaves probabilistically. It works 90% of the time in testing but fails on edge cases in production. The team spends twice as long on quality assurance as they planned.
The fix: scope AI features with explicit quality thresholds and failure modes. Define what "good enough" looks like before you start building, and plan for the monitoring and iteration that probabilistic features require.
The proprietary data advantage
One principle I keep coming back to: your lasting advantage in AI features almost never comes from the model. Models are improving fast and commoditizing. Your advantage comes from your data.
If you have proprietary data that makes AI features work better in your specific context, that's worth investing in. A CRM company that has ten years of sales conversation data can build AI coaching features that no general-purpose tool can match. A logistics company with historical route optimization data can build forecasting that outperforms generic solutions.
The strategic question isn't "Can we add AI?" but "What data do we have that would make AI features uniquely valuable in our product?" If the answer is "nothing that isn't available elsewhere," you should probably be buying, not building.
Timing your investments
Here's a rough heuristic I use for timing AI investments.
If the technology is mature and widely available (text generation, basic classification, summarization), act now. The window to differentiate on implementation quality is closing. Buy or build quickly and focus on the user experience and integration quality.
If the technology is emerging but improving fast (multimodal understanding, agentic workflows, complex reasoning), build prototypes to learn, but don't commit to production-grade infrastructure yet. Run experiments. Test with real users. Build organizational knowledge. But keep your investment reversible.
If the technology is early-stage or speculative (reliable multi-agent systems, domain-specific reasoning at expert level), watch and learn. Read the research. Talk to vendors. Understand what's coming. But don't bet your roadmap on it.
The teams that do well here are the ones that maintain a portfolio approach: some bets on mature technology that ship soon, some experiments with emerging capabilities, and some awareness of what's coming next. They avoid going all-in on any single wave.
What this means for your roadmap
If you're building your AI product strategy right now, I'd suggest framing it as a portfolio of bets across these three timing horizons. Put 60% of your AI investment into mature technology that solves known customer problems. Put 30% into experiments with emerging capabilities. Keep 10% for exploratory research and learning.
And be willing to shift those percentages as the landscape evolves. The teams that win here won't be the ones who made the best prediction about where AI is going. They'll be the ones who built the organizational capability to adapt quickly when the technology shifts.
That's the real strategy: not picking the right AI feature to build, but building a team and process that can keep making good AI decisions as the technology evolves.
This article is part of a series on product management in an AI-transformed landscape.