
Why your AI product roadmap is probably wrong

I've reviewed about two dozen AI product roadmaps in the past year, across startups and established companies. Most of them share the same structural problem: they're built on assumptions about where AI technology will be in 12 months, and those assumptions are almost certainly wrong.

This isn't a criticism of the teams who built them. Predicting AI capabilities on a 12-month horizon is genuinely hard right now. The pace of improvement is non-linear, the cost curves are shifting faster than anyone expected, and every quarter brings capabilities that would have seemed unlikely the quarter before.

But "it's hard to predict" isn't a reason to give up on roadmapping. It's a reason to roadmap differently.

The prediction problem

Most AI roadmaps I see are structured like traditional product roadmaps: a sequence of features with estimated timelines. "Q2: Add AI-powered search. Q3: Build recommendation engine. Q4: Launch AI assistant." The implicit assumption is that the team knows what's technically feasible at each stage and can estimate the effort.

That assumption breaks down with AI for a few specific reasons.

The cost of inference is dropping faster than most teams model. A feature that requires $50,000/month in compute costs today might cost $5,000/month in a year. This changes the ROI calculation for every AI feature on your roadmap, but most teams don't revisit these numbers.
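To make the ROI shift concrete, here is a minimal sketch of the calculation. All of the figures (attributable revenue, cost decline rate) are hypothetical assumptions for illustration, not benchmarks:

```python
# Illustrative only: how a feature's ROI can flip as inference costs fall.
# Every number here is a hypothetical assumption.

def monthly_roi(revenue: float, compute_cost: float, other_costs: float = 0.0) -> float:
    """Net monthly value of an AI feature."""
    return revenue - compute_cost - other_costs

feature_revenue = 20_000   # assumed monthly revenue attributable to the feature
cost_today = 50_000        # assumed inference spend today
annual_cost_decline = 0.90 # assume costs fall ~90% over the year

cost_next_year = cost_today * (1 - annual_cost_decline)

print(monthly_roi(feature_revenue, cost_today))      # negative: not viable today
print(monthly_roi(feature_revenue, cost_next_year))  # positive: viable in a year
```

The point isn't the arithmetic, which is trivial; it's that the inputs go stale quarterly, so the calculation has to be rerun, not archived.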

Foundation model capabilities improve in unpredictable jumps. When a new model generation drops, it often makes previous approaches obsolete. Teams that invested months in prompt engineering and fine-tuning for a specific model find that a newer model handles the same task better out of the box.

The competitive landscape shifts just as fast. The AI feature that felt differentiated six months ago might now be offered by five competitors or available as an API from a platform vendor. Your roadmap needs to account for this, and most don't.

How to roadmap for uncertainty

Teams need direction. Leadership needs to see a plan. The answer is to build roadmaps that account for uncertainty instead of pretending it doesn't exist.

Use time horizons, not timelines. Instead of structuring roadmaps as "Q2, Q3, Q4," try the "Now (committed, in flight), Next (validated problem, exploring solutions), Later (opportunity area, monitoring)" format that Tim Herbig and others have written about. This now-next-later approach works particularly well for AI because it doesn't overcommit to specific solutions in future quarters.

Build in decision points, not just milestones. At the end of each sprint or cycle, include an explicit decision: "Continue on this path, pivot the approach, or stop?" This turns your roadmap from a commitment into a series of options. You're not promising to build a recommendation engine in Q3. You're promising to evaluate whether a recommendation engine is still the right approach given what you've learned and what's changed in the market.

Separate the problem layer from the solution layer. Your roadmap should commit to problems, like "Customers can't find relevant content quickly enough," while keeping solutions flexible. It could be search improvements, recommendations, an AI assistant, or something that doesn't exist yet. When a new capability emerges that solves the problem better than your planned approach, you can adapt without abandoning the roadmap.

Include a technology monitoring layer by dedicating a small amount of team capacity (I suggest 10-15%) to staying current with AI capabilities. Someone on the team should regularly test new models, evaluate new APIs, read benchmarks, and report back on what's changed. This isn't research for research's sake. It's an early warning system that helps you update your roadmap assumptions before they lead you astray.

The quarterly recalibration

Every quarter, before you plan the next cycle, run through these questions.

What AI capabilities have changed since our last planning cycle? New models, new APIs, cost changes, new competitor features. Be specific.

Which of our roadmap assumptions are still valid? If you assumed a certain level of model capability or a certain compute cost, check whether those assumptions still hold.

Are any of our "Later" items now technically feasible? Sometimes capabilities that were speculative last quarter are achievable now. Moving them forward can be a source of competitive advantage.

Are any of our "Now" items no longer differentiated? If competitors have shipped similar features or if the capability is now available as a commodity API, reconsider whether custom building still makes sense.

What have we learned from our experiments? Every AI feature that's in production should be generating data about what works and what doesn't. Use that data to inform the next quarter's priorities.
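One way to make this review a diff rather than a memory exercise is to record assumptions explicitly when you plan, then walk the list each quarter. A lightweight sketch, with hypothetical fields and example entries:

```python
# Sketch of an assumption log for quarterly recalibration.
# Field names and example entries are illustrative, not prescriptive.

from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class Assumption:
    statement: str               # the belief the roadmap item rests on
    recorded: date               # when it was written down
    still_valid: Optional[bool]  # None = not yet re-checked this quarter

assumptions = [
    Assumption("Inference for feature X costs ~$50k/month", date(2024, 1, 15), None),
    Assumption("No competitor offers semantic search", date(2024, 1, 15), False),
]

# Anything unchecked or invalidated goes on the recalibration agenda.
agenda = [a for a in assumptions if a.still_valid is not True]
for a in agenda:
    print(a.statement)
```

A spreadsheet works just as well; what matters is that each roadmap item points at the assumptions it depends on, so a failed assumption immediately flags the items it invalidates.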

This isn't just good AI roadmapping. It's good roadmapping, period. The AI context just makes the recalibration more urgent because the landscape shifts faster.

What I've seen go wrong

Three patterns emerge repeatedly that derail AI roadmaps.

The first is the "boil the ocean" approach. A team plans to build an AI-powered everything: search, recommendations, summarization, chatbot, analytics, content generation. All in the same year. They spread themselves thin, and nothing ships at a quality level that moves the needle. The fix is ruthless prioritization. Pick the one AI capability with the highest chance of changing user behavior, nail it, and then move on.

The second is going "technology-forward." A team gets excited about a specific AI capability (say, RAG or agents), builds a roadmap around it, then goes looking for customer problems to apply it to. This is the build trap in AI clothing. Start with customer problems instead, then evaluate which AI capabilities could address them.

The third is the "one and done" mindset. A team ships an AI feature and moves on to the next one without investing in monitoring, iteration, and improvement. AI features are living systems that need ongoing tuning, quality monitoring, and user feedback loops. Your roadmap needs to include post-launch investment, not just initial build.

A realistic AI roadmap structure

If I were building an AI roadmap today, here's how I'd structure it.

The top layer is the problem portfolio: the three to five customer problems I'm targeting this year, ranked by impact and feasibility.

The middle layer is the solution portfolio: for each problem, the approaches I'm considering (including non-AI approaches), with current evaluation status.

The bottom layer is the capability portfolio: the AI capabilities my team is building or acquiring, mapped to the solutions that need them.

The key insight is that these layers don't have to move at the same pace. The problem portfolio is relatively stable (customer problems don't change quarterly). The solution portfolio shifts as technology evolves and experiments yield results. The capability portfolio changes fastest as new tools and models become available.

By separating these layers, you can update your AI roadmap frequently without losing strategic direction. The problems stay the same. The solutions evolve. And the capabilities flex with the technology.
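The three layers can be sketched as a simple data model. This is a minimal illustration with hypothetical names and statuses, not a tool recommendation; the key property is the direction of the references: capabilities point at solutions, solutions point at problems, never the reverse.

```python
# Sketch of the three-layer roadmap structure. Names are illustrative.
# Problems are stable; solutions reference problems; capabilities reference solutions.

from dataclasses import dataclass, field
from typing import List

@dataclass
class Problem:
    name: str
    impact: int       # 1-5, higher = bigger customer impact
    feasibility: int  # 1-5, higher = easier to address

@dataclass
class Solution:
    name: str
    problem: Problem
    status: str       # e.g. "exploring", "validated", "shipped"

@dataclass
class Capability:
    name: str
    solutions: List[Solution] = field(default_factory=list)

findability = Problem("Customers can't find relevant content quickly", impact=5, feasibility=3)
semantic_search = Solution("Semantic search", findability, status="exploring")
recommendations = Solution("Recommendation engine", findability, status="exploring")
embeddings = Capability("Embedding pipeline", [semantic_search, recommendations])

# Swapping a solution never touches the problem layer: if a new model makes
# the recommendation engine obsolete, only the middle layer changes.
print([s.name for s in embeddings.solutions])
```

Because the references only flow downward, the fast-moving capability layer can churn every quarter while the problem portfolio above it stays put.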


This article is part of a series on product management in an AI-transformed landscape.