3iap Metric Design Guidelines

Metrics benefit from user-centered design thinking just like any other medium. For data to influence decisions or behaviors within an organization, it needs to play nicely with the people who are impacted. This starts with how the KPIs are chosen, balanced and articulated.

Good metrics are:

  1. Outcome focused. Good metrics reflect the world you want to see. They should have a strong relationship with whatever it is you want to achieve. This can be easier said than done; developing metric discipline is difficult because it forces organizations to be precise about what they want to achieve. But with that clarity comes focus and alignment.
  2. Balanced. North-star doesn’t mean “only star”; metrics work best in sets. There are no outcomes worth chasing that can be described in a single number. For the same reason that cars have speedometers and windshields, it takes multiple signals to navigate toward a target end-state.
    • Unapologetically Prioritized. There’s nothing wrong with competing priorities or tension between metrics. However, there should be a clear order of precedence for resolving conflicts, and the more explicit, the better.
    • Responsibly Complete. In addition to tracking primary metrics, decision makers need visibility into second-order metrics they need to sustain (e.g. do sales of product A cannibalize sales of product B?), appropriate decompositions (e.g. median household income and household income quintiles tell different stories; see the first sketch after this list) and externalities to manage (e.g. newsfeed user engagement vs. user depression and radicalization).
  3. Understandable. Good metrics are easy to relate to the underlying activities that drive them. They use language, concepts, scales and units that are familiar to users. Numbers, by nature, are abstract. Sometimes complexity can’t be avoided. But be mindful of the costs (loss of comprehension, memorability, teachable moments, alignment, emotional connection, etc.).
  4. Fair. You can’t hold the weatherman accountable for the rain; no matter how big of a bonus you offer, there’s nothing they can do to influence the outcome. Good metrics are:
    • Controllable. If a person lacks the ability or agency to influence a metric, incentivizing them or holding them accountable against the metric will backfire. For metrics that aren’t directly controllable but are influenceable in aggregate (e.g. sales), balance outcome metrics with process metrics (e.g. # of touches).
    • Comparable. If the degree of control varies between groups or time periods, metrics should be normalized to account for the variability (e.g. the Red Lobster in Times Square will always outsell the Red Lobster in Allentown; comparing them on raw sales volume will only breed resentment among the Allentown employees). See the second sketch after this list.
  5. Robust. Metrics give teams direction without being prescriptive. They communicate the “what” without dictating the “how.” When it works, it works well. Teams can self-organize and bring their own creative solutions to achieve the target outcome. But there’s a Faustian challenge: Getting what you asked for doesn’t necessarily mean getting what you want. So metrics need to be defined to avoid mischief.
    • Counterbalanced. 1990s country singer Daryle Singletary was wrong. Cars can be too fast. People can have too much money. And if you have 14 people in the back of a pickup truck, you are in fact having too much fun. Mindlessly increasing the volume of anything can invite unintended consequences, so most metrics need counterbalances to provide guardrails (e.g. revenue vs. profit, sales vs. conversion, growth vs. retention).
    • Humane. If people in your fulfillment centers are peeing in bottles to maintain their quotas, if your branch managers encourage bank employees to create fraudulent checking accounts, or if your delivery drivers start running red lights when they’re running late, you might consider bounding the metric or building in forgiveness.
    • Light-touch. Heavy-handed metrics invite mischief by attempting to quantify behavior that’s better driven by cultural norms. This includes metrics that signal a lack of trust (e.g. employee mouse activity monitoring), exert granular control (e.g. measuring lines of code) or are closely tied to incentives (e.g. teacher bonuses for test scores).
  6. Economical. Measurement isn't free. Even defining metrics requires time and energy. Don’t track things that don’t lead to uncertainty reduction. Don’t track things the hard way when simple estimates will suffice. Don’t strive for precision you don’t need.
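
To make the “decompositions” point above a bit more concrete, here’s a minimal sketch (Python, with entirely made-up household incomes): the headline median drifts upward across two years while the decomposed quintiles show the bottom of the distribution slipping.

```python
# Illustrative only: all incomes below are invented. The point is that one
# aggregate (the median) and its decomposition (quintile means) can move in
# different directions on the same underlying data.

import statistics


def quintile_means(incomes):
    """Split incomes into five equal-sized groups (lowest to highest)
    and return the mean of each group."""
    ordered = sorted(incomes)
    size = len(ordered) // 5
    return [statistics.mean(ordered[i * size:(i + 1) * size]) for i in range(5)]


# Hypothetical household incomes (in $ thousands), two years apart.
year_1 = [12, 18, 24, 31, 38, 45, 52, 60, 70, 85, 105, 140, 190, 260, 400]
year_2 = [9, 14, 20, 29, 37, 45, 53, 62, 74, 92, 118, 160, 230, 330, 520]

for label, incomes in [("Year 1", year_1), ("Year 2", year_2)]:
    print(label,
          "| median:", statistics.median(incomes),
          "| quintile means:", [round(q) for q in quintile_means(incomes)])
```

Same data, two stories: the median nudges up while the bottom quintile falls, which is exactly why the decomposition belongs next to the headline number.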
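And for the “comparable” point, a similarly hedged sketch: the figures are invented, and “guests” stands in for whatever denominator best captures each location’s opportunity (foot traffic, seats, market size).

```python
# Illustrative only: store figures are made up. Normalizing the raw outcome
# (sales) by each store's opportunity (guests) makes the two comparable.

def sales_per_guest(sales, guests):
    """Raw sales normalized by the number of guests the store actually saw."""
    return sales / guests


stores = {
    # name: (weekly sales in dollars, weekly guest count), both invented
    "Times Square": (310_000, 9_800),
    "Allentown": (52_000, 1_500),
}

for name, (sales, guests) in stores.items():
    print(f"{name:12s}  raw sales: ${sales:>9,}   "
          f"sales per guest: ${sales_per_guest(sales, guests):.2f}")
```

On raw sales, Times Square wins by a mile; normalized per guest, Allentown is actually the stronger store.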