Continuous discovery when your users can't articulate what they need

There's a moment in every customer interview that experienced PMs recognize: the user pauses, looks slightly frustrated, and says something like "I just want it to be... better." Or "I don't know exactly what I want, but this isn't it."

That moment used to be a dead end. Now it's the whole ballgame.

As products get more complex and AI introduces capabilities that users haven't encountered before, the gap between what people can articulate and what they actually need is widening. Traditional discovery techniques were designed for a world where users could describe their problems, even if they couldn't design solutions. We're moving into a world where users can't always describe their problems either.

Why articulation is getting harder

Several things are converging to make user articulation more difficult.

Products are doing more. When a tool does one thing, users can tell you if it's doing that thing well. When a tool does fifty things and uses AI to connect them, users struggle to isolate what's working and what isn't. They have a general feeling about the experience, but they can't pinpoint the specific friction.

AI introduces unfamiliar capabilities. When you show someone an AI feature they've never seen before, they don't have a mental model for evaluating it. They can't tell you if the summarization is good because they've never had automated summarization. They can't tell you if the recommendation is helpful because they don't know what alternative they'd compare it to. The reference point doesn't exist yet.

Expectations are shaped by consumer AI. Your users are interacting with ChatGPT, with the AI features on their phones, and with generative tools in consumer products. These experiences create expectations that they can't always articulate. "Why isn't your AI as good as ChatGPT?" reveals a mismatch in expectations without surfacing the actual need.

Workflow problems are hard to self-diagnose. Many of the most valuable product improvements address workflow inefficiencies that users have adapted to. They've worked around the problem for so long that they don't see it as a problem anymore. Asking them "What's frustrating about your workflow?" misses these adapted-to problems entirely.

Discovery techniques that work below the surface

Teresa Torres' continuous discovery framework is still the right foundation here, but some of the specific techniques need to be adapted for this context. Here's what I've found works when users can't tell you what they need.

The most effective approach often involves observation over interviews. Watch people work. Actually sit with them (or screen-share) and observe their workflow for 30-60 minutes. Don't ask them to narrate. Just watch. You'll see workarounds, repeated actions, context switching, and dead ends that they'd never mention in an interview because they've normalized them.

I watched a customer success manager spend 12 minutes every morning copying data from one tool into another to prepare for her daily standups. When I asked her about pain points in her workflow, she didn't mention it. It was invisible to her. When I pointed it out, she said "Oh yeah, I've just always done that."

That 12 minutes times 250 workdays is 50 hours per year. Multiply that by every CS manager in the company. That's the kind of problem you find through observation that you'd never find through interviews.
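To make the stakes concrete, here's the back-of-the-envelope math as a quick sketch. The team size of 40 is a made-up figure; plug in your own.

```python
# Back-of-the-envelope cost of one "invisible" workaround.
minutes_per_day = 12
workdays_per_year = 250
cs_managers = 40  # hypothetical team size

hours_per_person = minutes_per_day * workdays_per_year / 60  # 50.0
team_hours = hours_per_person * cs_managers                  # 2000.0

print(f"{hours_per_person:.0f} hours/person/year, {team_hours:.0f} hours team-wide")
```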

When users can't imagine a capability, move beyond concept testing and try prototype testing instead. Build quick prototypes of AI features and put them in front of users. Don't ask "Would this be useful?" Ask "When would you use this? Walk me through your day and show me where this would fit." The difference matters. "Would this be useful?" gets you polite agreement. "Walk me through where this fits" gets you honest reactions. If they can't find a natural place for it in their workflow, the feature doesn't solve a real problem, regardless of how impressive the technology is.

Another powerful technique uses behavior analytics as discovery input. Your product analytics tell a story about what users are actually doing, which is often different from what they say they're doing. Look for patterns: where do users drop off? What features do they access but never return to? Where do they switch to a different tool? These behavioral signals point toward problems worth solving. Pair this with session recordings if you have them. Quantitative data tells you what's happening. Session recordings help you understand why.
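As a concrete sketch of what this can look like, here's one way to compute a "one-and-done" rate from an event log with pandas. The schema, feature names, and numbers are all hypothetical; adapt them to whatever your analytics tool exports.

```python
import pandas as pd

# Hypothetical event log -- one row per feature interaction.
events = pd.DataFrame({
    "user_id": [1, 1, 2, 3, 3, 4, 5],
    "feature": ["summarize", "summarize", "summarize",
                "export", "export", "summarize", "export"],
})

# "One-and-done" rate per feature: the share of users who tried it exactly
# once and never came back. High values are a prompt to pull session
# recordings and understand why people bounced.
uses = events.groupby(["feature", "user_id"]).size().rename("n").reset_index()
one_and_done = (
    uses.assign(single=uses["n"] == 1)
        .groupby("feature")["single"]
        .mean()
        .sort_values(ascending=False)
)
print(one_and_done)
# summarize    0.67  (2 of 3 users never returned)
# export       0.50  (1 of 2 users never returned)
```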

Finally, comparative testing sidesteps the challenge of articulating what "better" means. Give users two options and ask which one they prefer. A/B testing does this at scale, but you can also do it qualitatively in interviews: show two different versions of an AI feature and ask users to compare them. People are much better at choosing between options than at describing what they want from scratch.
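One caution: small qualitative samples can mislead, so before treating a preference as a signal, it's worth a quick sanity check on the count. A minimal sketch with SciPy, using made-up numbers:

```python
from scipy.stats import binomtest

# Hypothetical tally: 15 of 20 interviewees preferred version B over version A.
# Test against the null hypothesis that preference is a 50/50 coin flip.
result = binomtest(k=15, n=20, p=0.5, alternative="greater")
print(f"p = {result.pvalue:.3f}")  # ~0.021 -- unlikely to be chance
```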

The assumption mapping technique

One approach I've developed specifically for AI product discovery is what I call assumption mapping. It works like this.

Before you talk to users, write down every assumption your team is making about the AI feature you're considering. Be thorough. Common assumptions include: "Users will trust AI-generated recommendations," "Users will understand what the AI is doing," "This saves users meaningful time," "The AI output quality is good enough for the use case," "Users will change their workflow to incorporate this feature."

Then rank these assumptions by two dimensions: how critical the assumption is to the feature's success, and how confident you are that the assumption is true.

The assumptions that are highly critical and low confidence are your discovery priorities. Design your user research specifically to test those assumptions. This is more focused than general discovery because you're not trying to understand the whole problem space. You're trying to reduce the risk of specific bets.

For example, if your team wants to build an AI feature that auto-generates reports for executives, your riskiest assumptions might be: "Executives will trust AI-generated reports enough to share them with their boards" and "The report quality is high enough that executives don't spend more time editing than they save." Testing those assumptions requires putting prototypes in front of real executives and watching how they react, what they edit, and whether they'd actually send the output to their board.
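If it helps to operationalize the ranking step, here's a minimal sketch. The assumptions and the 1-5 scores are illustrative; in practice the team assigns them together in a workshop.

```python
# Minimal assumption-mapping sketch: rank by criticality (high first),
# then by confidence (low first). The top of the list is your discovery backlog.
assumptions = [
    # (assumption, criticality 1-5, confidence 1-5)
    ("Execs trust AI reports enough to share with their boards", 5, 1),
    ("Report quality beats the time cost of editing",            5, 2),
    ("Users will change their workflow to incorporate this",     4, 4),
    ("Users understand what the AI is doing",                    3, 3),
]

for text, crit, conf in sorted(assumptions, key=lambda a: (-a[1], a[2])):
    print(f"criticality={crit} confidence={conf}  {text}")
```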

When "just ship it and see" is the right call

I want to be fair about this. Sometimes the fastest path to learning is shipping something and watching what happens. This is especially true when the cost of building is low (AI makes many prototypes cheap), the risk of failure is contained (it's a low-stakes feature that won't damage trust), and you're testing behavior, not just preference.

The key distinction: "ship and see" works when you have clear success criteria and a plan for measuring them. It doesn't work when you're shipping into a vacuum and hoping adoption metrics will tell you something. You need to know what you're looking for.
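Here's a sketch of what written-down success criteria might look like; the metrics, targets, and windows are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class SuccessCriterion:
    metric: str
    comparison: str  # "at_least" or "at_most"
    target: float
    window_days: int

# Hypothetical pre-launch plan for an AI summarization feature: the bar
# and the measurement window are decided before shipping, not after.
plan = [
    SuccessCriterion("weekly_active_usage_rate", "at_least", 0.20, 30),
    SuccessCriterion("summary_edit_rate",        "at_most",  0.50, 30),
    SuccessCriterion("week_4_retention",         "at_least", 0.40, 28),
]
```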

And "ship and see" should be a complement to the deeper discovery work, not a replacement for it. It's one technique in the toolkit, not the whole toolkit.

Building the discovery muscle

The uncomfortable truth is that most product teams aren't good at the kind of discovery I'm describing. They can run a customer interview. They can send a survey. But observation-based research, prototype testing with real workflows, and assumption-driven experiments require skills that many teams haven't developed.

If that's where your team is, start small. Commit to one observation session per week. Build one prototype per sprint and test it with three users. Write down your assumptions before you build anything and test the riskiest one before committing resources.

The muscle builds over time. And in a world where AI is creating products that users can't yet imagine needing, the teams with the strongest discovery skills will have the biggest advantage.


This article is part of a series on product management in an AI-transformed landscape.