
Metrics That Matter: Choosing the Right Success Indicators

The question of which metrics to track is one of the most consequential decisions a product team makes. The wrong metrics create perverse incentives, obscure real user value, and produce roadmaps optimized for the wrong outcomes. This article examines the principles underlying metric selection, the common failure modes, and a structured approach to building a measurement system that reliably surfaces meaningful signal.

1. The Metric Selection Problem

Metrics serve two distinct functions that are frequently conflated: goal-setting (what outcome are we driving toward?) and health monitoring (is the system behaving as expected?). Conflating these functions leads to a single metric being asked to do too much work—and ultimately doing neither job well.

The most damaging version of this failure is Goodhart's Law in action: when a measure becomes a target, it ceases to be a good measure. Teams that optimize relentlessly for a single metric—daily active users, for instance—tend to find clever ways to move the metric without creating genuine user value. Streaks, notifications, and dark patterns increase DAU. They do not increase user success.

2. A Hierarchy of Metrics

A well-designed measurement system has three levels, each with a distinct role:

  • North Star Metric: One metric that represents the core value the product delivers to users at scale. It should be a leading indicator of long-term business health, not a lagging financial metric. For a productivity tool, this might be "tasks completed per active user per week." For a marketplace, it might be "successful transactions." The North Star should be nearly impossible to game without creating genuine value.
  • Input Metrics: Three to five metrics that represent the key drivers of the North Star. These are the levers the team directly controls. If the North Star is "tasks completed per user," input metrics might include feature adoption rate, time-to-first-task, and collaboration rate. Teams should build roadmaps around moving input metrics.
  • Health Metrics: Guardrails that ensure optimization of input metrics does not come at the expense of other important dimensions—performance, reliability, user satisfaction, support volume. Health metrics are not optimized; they are monitored. If a health metric degrades, it triggers investigation regardless of what is happening with input metrics.
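The three-level structure above can be sketched as a small data model. This is a hypothetical illustration, not an implementation from the article; the metric names, values, and guardrail floors are invented for the example, and it encodes the key asymmetry: inputs are optimized, health metrics are only monitored against a floor.

```python
from dataclasses import dataclass

@dataclass
class Metric:
    name: str
    value: float

@dataclass
class HealthMetric(Metric):
    floor: float  # guardrail: falling below this triggers investigation

@dataclass
class MeasurementSystem:
    north_star: Metric       # one metric representing core user value
    inputs: list             # 3-5 drivers the team directly controls
    health: list             # guardrails: monitored, never optimized

    def degraded_guardrails(self):
        """Health metrics below their floor -- investigated regardless
        of what is happening with the input metrics."""
        return [m.name for m in self.health if m.value < m.floor]

# Illustrative values for a productivity tool (all numbers hypothetical).
system = MeasurementSystem(
    north_star=Metric("tasks_completed_per_active_user_per_week", 4.2),
    inputs=[
        Metric("feature_adoption_rate", 0.31),
        Metric("time_to_first_task_minutes", 12.0),
        Metric("collaboration_rate", 0.18),
    ],
    health=[
        HealthMetric("crash_free_sessions_pct", 99.1, floor=99.5),
        HealthMetric("csat_score", 4.4, floor=4.2),
    ],
)

print(system.degraded_guardrails())  # -> ['crash_free_sessions_pct']
```

Keeping the guardrail check separate from the input metrics makes the monitoring-versus-optimizing distinction explicit in code: nothing in the system ever "improves" a health metric, it only flags degradation.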

3. Common Failure Modes

Several metric failure modes appear repeatedly across organizations of different sizes:

  • Vanity metrics: Page views, registered users, and app downloads feel good but rarely correlate with user success or business health. They are easy to move and easy to misinterpret. Replace them with metrics that require genuine value exchange—retained users, completed core actions, revenue generated.
  • Metric proliferation: Teams that track everything effectively track nothing. When every team meeting involves a different set of metrics, organizational attention fragments and accountability diffuses. Fewer metrics, well understood, produce better decisions than many metrics, poorly understood.
  • Lagging indicators as leading indicators: Revenue and churn are important metrics, but they are lagging indicators—they tell you about the past, not the future. By the time churn spikes, the retention problem is months old. Build a measurement system that includes early warning signals: engagement frequency, feature breadth, support contact rate.
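One of the early warning signals mentioned above, engagement frequency, can be computed directly from activity logs. The sketch below is a hypothetical example: the data shape, the `drop_ratio` threshold, and the function name are assumptions, but it shows the general idea of flagging a sharp engagement decline long before it surfaces as churn.

```python
def at_risk_users(active_days_by_week, drop_ratio=0.5):
    """Flag users whose engagement frequency dropped sharply.

    active_days_by_week: {user_id: [active_days_wk1, active_days_wk2, ...]}
    A user is flagged when their latest week falls below drop_ratio times
    their trailing average -- a leading signal, available months before
    the decline shows up in lagging metrics like churn or revenue.
    """
    flagged = []
    for user, weeks in active_days_by_week.items():
        if len(weeks) < 2:
            continue  # not enough history to establish a baseline
        baseline = sum(weeks[:-1]) / len(weeks[:-1])
        if baseline > 0 and weeks[-1] < drop_ratio * baseline:
            flagged.append(user)
    return flagged

# Illustrative data: days active per week, per user.
history = {
    "u1": [5, 5, 4, 1],   # sharp drop -> early warning
    "u2": [3, 3, 3, 3],   # steady -> no signal
    "u3": [2],            # too little history to judge
}
print(at_risk_users(history))  # -> ['u1']
```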

4. Defining Metrics Rigorously

A metric is not defined until four things are specified: what is being counted, how it is being counted, over what time window, and for what population. "Engagement" is not a metric. "The percentage of users who completed at least three core actions in their first seven days after account creation" is a metric. The specificity is not pedantry—it prevents the measurement disagreements that routinely derail product reviews and planning cycles.
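The rigor argued for above is easiest to see when the definition is executable. The sketch below computes the example metric from the text, with the counting rule, threshold, window, and population all explicit parameters; the event schema and field names are illustrative assumptions.

```python
from datetime import datetime, timedelta

def activation_rate(users, events, core_actions, min_count=3, window_days=7):
    """Percentage of users who completed at least `min_count` core actions
    within `window_days` of account creation.

    users:  {user_id: signup_datetime}        -- the population
    events: list of (user_id, action, timestamp)
    All four parts of the definition -- what is counted (core_actions),
    how (min_count), the window, and the population -- are explicit.
    """
    counts = {uid: 0 for uid in users}
    for uid, action, ts in events:
        if uid not in users or action not in core_actions:
            continue
        if ts - users[uid] <= timedelta(days=window_days):
            counts[uid] += 1
    activated = sum(1 for c in counts.values() if c >= min_count)
    return activated / len(users) if users else 0.0

# Illustrative data: two users, one activates within the window.
t0 = datetime(2024, 1, 1)
users = {"a": t0, "b": t0}
events = [
    ("a", "create_task", t0 + timedelta(days=1)),
    ("a", "create_task", t0 + timedelta(days=2)),
    ("a", "share_doc",   t0 + timedelta(days=3)),
    ("b", "create_task", t0 + timedelta(days=1)),
    ("b", "create_task", t0 + timedelta(days=10)),  # outside the 7-day window
]
print(activation_rate(users, events, {"create_task", "share_doc"}))  # -> 0.5
```

Encoding the definition this way also settles measurement disagreements: a dispute about "engagement" becomes a dispute about a specific parameter value.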

5. Connecting Metrics to Decisions

Metrics only create value when they change decisions. For every metric in the tracking system, the team should be able to answer: "If this metric moves significantly in either direction, what would we do differently?" If the answer is "nothing," the metric should be removed. The discipline of this question—applied periodically to the full metric set—keeps measurement systems lean and decision-relevant over time.

Conclusion

Metric selection is a strategic act, not a technical one. The metrics a team chooses to track encode a theory of what creates value and how success should be defined. That theory should be made explicit, debated, and updated as the product and market evolve. Teams that treat metrics as fixed artifacts rather than evolving hypotheses tend to find themselves optimizing confidently toward the wrong goals.