
Discovery isn't dead: why AI makes it harder, not easier

I keep hearing the same claim from product leaders who should know better: "AI will automate discovery." They say it with total certainty, like they've already solved the hardest part of product work by plugging in a chatbot.

They haven't. And I think this belief is going to cost a lot of teams a lot of time.

The misconception

The argument goes something like this: AI can synthesize customer interviews faster, parse support tickets at scale, and generate user personas in minutes. So why do we need a whole discovery process? Just feed the data into the model and let it tell you what to build.

If you've done real discovery work, you already see the problem. Discovery isn't a data processing exercise. It's a thinking exercise. The hard part was never reading the transcripts. The hard part is knowing which questions to ask in the first place, recognizing when a customer is telling you what they think you want to hear, and sitting with ambiguity long enough to find the real problem underneath the stated one.

AI can't do any of that. Not yet, anyway, and probably not for a while.

What actually happens when teams skip discovery

I've watched this pattern repeat at three different companies over the past year. A team gets excited about AI tooling, uses it to generate a bunch of "insights" from customer data, builds a feature based on those insights, and then watches it underperform.

The failure mode is always the same. The AI found patterns in what customers said, not in what they actually need. There's a gap between those two things, and closing that gap requires the kind of judgment that comes from direct contact with users. Teresa Torres calls this continuous discovery for a reason. It's not a one-time extraction exercise. It's an ongoing relationship with the problem space.

One team I advised had used AI to analyze 2,000 support tickets and concluded that "navigation" was the top customer pain point. They redesigned their entire nav structure. Usage dropped. When they went back and actually talked to customers, they found that "navigation" was just the word people used when they couldn't find a specific feature that was buried three levels deep. The fix was a search bar, not a redesign.

Where AI actually helps with discovery

I'm not saying throw away your AI tools. I'm saying stop treating them like they replace the thinking part of discovery. Here are the spots where AI genuinely makes discovery better.

Synthesis after conversations, not instead of them. Record your customer interviews, run them through a transcription and summary tool, and use AI to pull out themes across multiple conversations. That's useful. That saves real time. But you still need to have the conversations.
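
As a rough sketch of what that synthesis step can look like, here's one way to pull themes across transcripts with the OpenAI Python client. The folder name, model choice, and prompt wording are mine, not a prescribed workflow; the point is that the transcripts only exist because you had the conversations.

    # Sketch: cross-interview theme extraction with the OpenAI Python client.
    # Assumes `pip install openai` and OPENAI_API_KEY in the environment;
    # the interviews/ folder and the prompt are illustrative.
    from pathlib import Path

    from openai import OpenAI

    client = OpenAI()

    # Transcripts produced from interviews you actually conducted.
    transcripts = [p.read_text() for p in sorted(Path("interviews").glob("*.txt"))]

    prompt = (
        "Below are transcripts from separate customer interviews. List the "
        "recurring themes. For each theme, quote one supporting passage and "
        "say which transcript it came from.\n\n"
        + "\n\n---\n\n".join(
            f"Transcript {i + 1}:\n{t}" for i, t in enumerate(transcripts)
        )
    )

    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": prompt}],
    )
    print(response.choices[0].message.content)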

Pattern matching across large datasets. If you have thousands of support tickets or NPS responses, AI can surface clusters you'd miss manually. Use that as input to your discovery process, not as the output.
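
A minimal clustering pass shows how thin the machine's contribution is here. The CSV path, column name, and cluster count below are placeholders; the step that matters is the final loop, where a human reads the raw tickets.

    # Sketch: surfacing ticket clusters with TF-IDF and k-means via
    # scikit-learn. File name, column name, and k are assumptions to adapt.
    import pandas as pd
    from sklearn.cluster import KMeans
    from sklearn.feature_extraction.text import TfidfVectorizer

    tickets = pd.read_csv("support_tickets.csv")["body"]

    # Vectorize ticket text; stopword removal keeps filler out of clusters.
    vectors = TfidfVectorizer(stop_words="english", max_features=5000).fit_transform(tickets)

    # k is a guess worth varying; clusters are candidates, not conclusions.
    labels = KMeans(n_clusters=8, n_init="auto", random_state=0).fit_predict(vectors)

    # The judgment step: print raw tickets per cluster for a human to read.
    for cluster in range(8):
        print(f"\n--- Cluster {cluster}: {(labels == cluster).sum()} tickets ---")
        for text in tickets[labels == cluster].head(3):
            print("-", text[:120])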

Generating hypotheses to test. AI is good at producing plausible ideas quickly. Treat those as starting points for experimentation, not as validated insights.

Summarizing competitive research. Scraping competitor changelogs, review sites, and forums is tedious. AI handles this well, and it frees you up for the harder work of deciding what the competitive data means for your product.
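
The tedious half is mechanical enough to sketch: fetch the page, strip it to text, ask a model to condense it. The URL and CSS selector below are hypothetical, and what the summary means for your roadmap is still your call.

    # Sketch: condensing a competitor changelog. Assumes requests,
    # beautifulsoup4, and the OpenAI client; URL and selector are hypothetical.
    import requests
    from bs4 import BeautifulSoup
    from openai import OpenAI

    html = requests.get("https://example.com/changelog", timeout=30).text
    notes = [n.get_text(" ", strip=True)
             for n in BeautifulSoup(html, "html.parser").select(".release-note")]

    client = OpenAI()
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{
            "role": "user",
            "content": "Summarize the product direction implied by these "
                       "release notes:\n\n" + "\n".join(notes),
        }],
    )
    print(response.choices[0].message.content)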

The Melissa Perri problem

Melissa Perri wrote about the "build trap" years ago, and her diagnosis still applies. Organizations that treat product management as a delivery function will keep building the wrong things, no matter how sophisticated their tools get. AI doesn't fix organizational dysfunction. It amplifies it.

If your company doesn't value discovery, AI just makes it easier to skip. You'll generate "insights" faster, ship features faster, and fail faster. Faster failure is only useful if you're actually learning from it, and most teams aren't set up for that.

The product operating model that Perri and Marty Cagan advocate for requires something AI can't provide: a genuine understanding of why discovery matters and the organizational patience to do it properly. That's a leadership problem, not a tooling problem.

What I'd actually recommend

If you're a PM trying to figure out where AI fits in your discovery practice, here's where I'd start.

Keep your weekly customer touchpoints. Don't reduce them because AI is "handling" customer insights. If anything, increase the frequency. The AI-generated summaries should make you more curious, not less.

Use AI to prepare for conversations, not to replace them. Before an interview, have AI pull together what you know about that customer from support tickets, usage data, and past interactions. Walk in with context, not assumptions.
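
Concretely, that prep can be a short script. The file names and fields below are stand-ins for whatever your ticketing and analytics exports actually look like; the brief is context for your questions, not a substitute for asking them.

    # Sketch: a pre-interview brief from data you already hold. The JSON
    # exports and their fields are hypothetical; adapt to your own systems.
    import json

    from openai import OpenAI

    def interview_brief(customer_id: str) -> str:
        tickets = [t for t in json.load(open("tickets.json"))
                   if t["customer_id"] == customer_id]
        usage = json.load(open("usage.json")).get(customer_id, {})
        prompt = (
            "Write a one-page pre-interview brief on this customer. Flag open "
            "questions worth probing. Do not speculate beyond the data.\n\n"
            f"Support tickets: {json.dumps(tickets)}\n"
            f"Usage summary: {json.dumps(usage)}"
        )
        response = OpenAI().chat.completions.create(
            model="gpt-4o-mini",
            messages=[{"role": "user", "content": prompt}],
        )
        return response.choices[0].message.content

    print(interview_brief("cust_1234"))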

Be skeptical of AI-generated personas and journey maps. They look polished. They read well. And they're built from statistical averages, which means they describe nobody in particular. Real personas come from real pattern recognition across real conversations.

Build a "discovery stack" where AI handles volume and humans handle judgment. Let machines process the data. Let people decide what the data means.
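
One cheap way to enforce that split in the stack itself: everything the machines produce lands in a review queue as a candidate, with the evidence attached, and nothing moves until a person fills in the verdict. A toy version of that structure, with names I made up:

    # Sketch: machine output stops at a human review queue. The dataclass
    # and file name are illustrative structure, not a product.
    import json
    from dataclasses import asdict, dataclass, field

    @dataclass
    class CandidateInsight:
        theme: str                                     # machine-surfaced pattern
        evidence: list = field(default_factory=list)   # raw quotes a human can read
        verdict: str = ""                              # filled in by a person, never the pipeline

    def enqueue(insights, path="review_queue.json"):
        # Persist candidates for review; the pipeline's authority ends here.
        with open(path, "w") as f:
            json.dump([asdict(i) for i in insights], f, indent=2)

    enqueue([CandidateInsight("navigation complaints",
                              ["Can't find export", "Where is billing?"])])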

The uncomfortable truth

The PMs who will thrive in an AI-heavy product world aren't the ones who learn to use AI tools fastest. They're the ones who get better at the things AI can't do: building relationships with customers, developing product intuition through repeated exposure to real problems, and making decisions with incomplete information.

Discovery isn't dead. It's more important than it's been in years. The teams that understand this will build products that actually matter. The teams that automate it away will build faster, ship more, and wonder why nobody cares.


This article is part of a series on product management in an AI-transformed landscape. The ideas draw on frameworks from Teresa Torres' Continuous Discovery Habits, Melissa Perri's product operating model, and Marty Cagan's work on empowered product teams.