Stop shipping features. Start changing behavior.
I want to share something that changed how I think about product management. It's not a framework or a tool. It's a question: "What behavior are we trying to change?"
Not "What are we building?" Not "What's on the roadmap?" Not even "What problem are we solving?" But specifically: what human behavior should be different after we ship this?
Most PMs I work with can't answer that question clearly. They can describe the feature. They can explain the customer pain point. They can point to the Jira tickets. But when pressed on what specific behavior change they're designing for, things get vague.
That vagueness is where most product failures hide.
Features are outputs. Behaviors are outcomes.
This distinction isn't new. People have been talking about outcomes over outputs for years. But I think the language of "outcomes" has become so abstract that it's lost its teeth. "Improve retention" is an outcome. "Increase NPS" is an outcome. They're also so broad that they don't guide product decisions in any useful way.
Behavior change is more specific. "Users save their work at least once during their first session" is a behavior. "Sales reps review AI-generated insights before their first customer call each morning" is a behavior. "New customers complete the integration setup without contacting support" is a behavior.
When you frame your product work in terms of behavior change, several things get clearer. You can observe whether the behavior is actually happening. You can design the product experience to encourage it. You can measure it. And you can tell the difference between "the feature works" and "the feature matters."
Why this is particularly relevant now
In an AI-augmented product world, the gap between shipping features and changing behavior is growing. AI makes it easier to build and ship things. That means teams are producing more features faster. But shipping faster doesn't mean you're creating more value.
I've seen teams use AI to accelerate their release cycle from monthly to weekly. They're shipping four times as much. Their metrics haven't moved. Why? Because the additional features aren't changing how anyone uses the product. They're just adding more stuff to an already crowded interface.
The teams that are using AI effectively aren't building more. They're building differently. They're using AI to understand user behavior better, to test hypotheses faster, and to personalize experiences. The AI isn't accelerating feature delivery. It's accelerating learning.
How to think in behaviors
Here's the process I use with teams. It starts with the current state: what are users doing today that we want to change? Not what they say they want. Not what we wish they'd do. What are they actually doing, right now?
Then the desired state: what do we want them to do instead? Be specific. "Use the product more" isn't specific enough. "Check the analytics dashboard at least twice per week" is specific. "Share reports with at least one colleague per month" is specific.
Then the barrier analysis: why aren't they already doing the desired behavior? This is where the product insights live. Maybe they don't know the dashboard exists. Maybe it takes too long to load. Maybe the data isn't trustworthy. Maybe they don't understand how to read it. Each barrier suggests a different solution.
Finally, the intervention design: what's the minimum product change that could overcome the barrier? Notice I said minimum. Not "what feature should we build?" but "what's the smallest thing we could do to move the behavior?"
A real example
A team I worked with was building an AI-powered feature that generated weekly performance summaries for managers. The feature worked well technically. It pulled data from multiple sources, synthesized it into readable summaries, and delivered them every Monday morning.
Adoption was low. About 15% of managers opened the summary each week, and even fewer took any action based on it.
Here's what we found when we applied the behavior change lens.
Current behavior: Managers check metrics ad hoc when something feels wrong, usually triggered by a complaint or escalation. They don't have a regular rhythm of reviewing team performance data.
Desired behavior: Managers review their team's performance data weekly and have at least one data-informed conversation with a team member based on what they find.
Barriers: The summary arrived Monday morning when managers were in meetings. The format was a wall of text that took 10 minutes to read. The data was accurate but didn't highlight what needed attention. And managers didn't know what to do with the information even when they read it.
Minimum interventions: Send the summary Friday afternoon instead of Monday morning. Replace the narrative format with a "three things that need your attention" format. Add suggested conversation starters for each item. Make it scannable in under 60 seconds.
The team made these changes in about two weeks. Adoption went from 15% to 52%. The behavior change wasn't "get managers to read a report." It was "get managers to have data-informed conversations with their team every week." The report was just the vehicle.
Connecting behavior change to business metrics
The question I always get: "This is nice, but how do I connect specific behavior changes to the business metrics my leadership cares about?"
You build a behavior chain. Start with the business metric (revenue, retention, whatever leadership tracks) and work backward to the user behaviors that drive it. Then go one level deeper to the product interactions that enable those behaviors.
For example: customer retention (business metric) depends on customers getting regular value from the product (behavior), which depends on them completing their workflow at least three times per week (product interaction), which depends on them understanding how the workflow fits their existing process (enabler).
Each link in the chain is a hypothesis you can test. And the behavior level is where you have the most direct influence as a product team. You probably can't control retention directly. But you can design experiences that encourage the behaviors that drive retention.
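To make "each link is a hypothesis" concrete, here's a minimal sketch in Python of the retention chain above expressed as testable links. Every name, number, and measurement is a placeholder I've invented for illustration; in practice the observed() callables would be real queries against your analytics.

```python
# A behavior chain as a list of testable links. All claims, thresholds,
# and observed values below are placeholders, not real data.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Link:
    level: str                     # business metric, behavior, interaction, enabler
    claim: str                     # the hypothesis this link asserts
    observed: Callable[[], float]  # how we'd measure it (stubbed below)
    threshold: float               # the value at which we call the claim supported

    def holds(self) -> bool:
        return self.observed() >= self.threshold

chain = [
    Link("business metric", "90-day retention stays above 80%",
         lambda: 0.83, 0.80),
    Link("behavior", "customers get regular value from the product",
         lambda: 0.78, 0.75),
    Link("product interaction", "customers complete the workflow 3+ times per week",
         lambda: 0.61, 0.70),
    Link("enabler", "customers understand how the workflow fits their process",
         lambda: 0.45, 0.70),
]

# A broken link is where to intervene; here the chain breaks at the
# interaction level, and the enabler below it suggests why.
for link in chain:
    status = "holds" if link.holds() else "broken"
    print(f"[{status}] {link.level}: {link.claim}")
```

The value of writing it down this way is that the chain stops being a slide and becomes a set of claims you can check one at a time.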
What this means for how you write PRDs
If you buy this framing, it changes how you specify product work. Every PRD or spec should include a section that answers these questions.
What is the current behavior we're trying to change? Be specific. Include data if you have it.
What is the target behavior? Describe it in observable terms. If you can't observe it, you can't measure it, and you can't tell if your feature worked.
What barriers exist between the current and target behavior? List them and describe how you'll address each one.
How will we know if the behavior changed? Define the metric and the threshold. "50% of users do X within 30 days of launch" is a clear success criterion.
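For that last question, here's a rough sketch of what checking such a criterion could look like against an event log, using pandas. The launch date, file, column names, and event name are all assumptions for illustration; the shape of the check (a launch date, a 30-day window, unique users doing the behavior over an eligible base) is the point.

```python
import pandas as pd

LAUNCH = pd.Timestamp("2025-06-01")  # hypothetical launch date
THRESHOLD = 0.50                     # "50% of users do X within 30 days of launch"

# Hypothetical event log with columns: user_id, event_name, timestamp.
events = pd.read_csv("events.csv", parse_dates=["timestamp"])

window = events[(events["timestamp"] >= LAUNCH) &
                (events["timestamp"] < LAUNCH + pd.Timedelta(days=30))]

# Users who performed the target behavior at least once in the window.
did_x = window.loc[window["event_name"] == "saved_work", "user_id"].nunique()
eligible = events["user_id"].nunique()  # denominator: all users seen in the log

rate = did_x / eligible if eligible else 0.0
print(f"{rate:.0%} of users did X within 30 days (target: {THRESHOLD:.0%})")
```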
Answering these questions is harder than writing a feature spec. It requires you to understand your users deeply, have opinions about what they should be doing differently, and commit to a measurable claim about the impact of your work. That's the hard work of product management. AI can help you do it faster, but it can't do it for you.
This article is part of a series on product management in an AI-transformed landscape.