
KPIs often resemble school exams: a formal mechanism to demonstrate compliance to a higher authority, while the original purpose—learning, capability-building, and real progress—quietly fades into the background.
Everyone knows the theory. KPIs should reflect strategy, drive the right behaviors, and support long-term performance. And yet, in reality, they mostly reveal what management actually cares about—and what it avoids changing. “We don’t measure what matters; we manage what we measure, even if it doesn’t matter.”
So KPI discussions feel less like management and more like exam preparation: What do we need to show? What will be accepted? How do we pass—preferably without changing too much?
The problem is not the KPI methodology. The problem is what happens to KPIs once real managers, real pressure, and real reporting cycles enter the picture.
In theory, targets should be ambitious enough to force change. In practice, many KPI targets are set by extrapolating from past performance, so they look achievable, and then adding a bit of extra effort. If a target can be reached by doing more of the same, it is not a strategic KPI—it is a forecast with a bonus attached.
What this produces is not ambition, but optimisation. The equivalent of studying just enough to secure a good grade—without actually mastering the subject.
How it should work is very different. A truly ambitious KPI defines a new level of performance that cannot be reached by continuing existing practices. It assumes that processes, capabilities, and behaviors will have to change.
That is not exam preparation. That is education. And it is far more demanding than calculating what score is needed to pass.
At some point, the uncomfortable question emerges: what are we actually trying to achieve? Is the goal to pull a metric upward in the short term, or to elevate the operating model to a new level? Management claims to want structural improvement, but behaves as if a short-term uplift in the metric is the real goal.
Under pressure from top management or a parent company, the answer often becomes very clear: the immediate objective is to show improvement. As a result, attention shifts to whichever metrics move fastest and look best in the next report.
Just like in exams, narratives emerge to explain the grade rather than the learning.
Meanwhile, the underlying capabilities—the equivalent of actual knowledge—may remain unchanged. The organisation has passed the exam, but has learned very little.
Not all KPIs measure the same thing, just as not all exams assess real understanding. The choice of KPI type determines whether improvement is superficial or fundamental.
Outcome-type KPIs—revenue, margin, growth—are like final grades. They are visible, intuitive, and easy to report. They move quickly, which makes them attractive under short-term pressure.
Driver-type KPIs are different. They resemble indicators of learning and skill development rather than the final grade itself.
These metrics are harder to influence and slower to change. They do not produce spectacular short-term jumps. Instead, they require sustained effort over time, sometimes over multiple years, before a clear trend emerges.
From a management perspective, this is deeply inconvenient. Driver KPIs demand patience, consistency, and tolerance for ambiguity. But they are also the only ones capable of producing fundamental, lasting change.
Some business effects are the result of multiple drivers. The logical response is to create composite KPIs that combine several indicators into a single number. Methodologically, this can be correct. Practically, it often fails.
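To see why, consider a purely hypothetical composite (the name, components, and weights here are invented for illustration, not taken from any real scorecard):

$$\text{CustomerValueIndex} = 0.40 \cdot \text{NPS}_{\text{norm}} + 0.35 \cdot \text{RetentionRate} + 0.25 \cdot \text{FirstContactResolution}$$

Every component may be defensible on its own, yet almost nobody outside the team that built the index can say what they should do differently on Monday to move it.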
A KPI that is theoretically perfect but fails to mobilise behavior because it requires explanation is worse than a simpler, imperfect one. Employees cannot “study for” an exam they do not understand. The result is detachment, not engagement.
There is a quiet but important trade-off here:
A slightly imperfect but intuitive metric often changes more than a perfectly designed one that no one can explain without slides.
Some metrics are easy to understand and therefore powerful, even if they measure reality only indirectly. Take value for money. Within the same organisation, this can mean a lower price for the same quality, higher quality at the same price, or simply a better overall customer experience for the money.
Which interpretation dominates depends less on methodology and more on organisational culture. If the shared understanding is aligned, the metric can mobilise real improvement—even if it technically measures only part of the story.
Naming also matters more than managers like to admit. Setting a target around “premium brand” versus “premium position” can shift perceived ownership entirely. In the first case, everyone assumes it is marketing’s exam to pass. In the second, the implication is broader, even if the underlying ambition is identical.
Just like in school, the wording of the exam determines who studies—and who checks out.
Even the best-designed exam is useless if it is never reviewed.
Organisations can get everything else right: ambitious targets, thoughtful metrics, clear intent.
Without systematic tracking, consistent review, and real accountability, none of it matters. KPIs turn into ceremonial grades—reported, acknowledged, and quickly forgotten.
In education, learning requires regular feedback. In management, performance improvement requires the same. Without it, KPIs are not management tools; they are reporting artifacts.
KPIs rarely fail because they are poorly designed. They fail because organisations treat them like exams to pass rather than capabilities to build. The irony is that everyone involved usually knows the theory. But under pressure, practice drifts toward what is easiest to explain upward, fastest to improve, and safest to defend.
The result is predictable: good grades, little learning, and no real progress. And just like in school, the problem is not the exam. It is the decision to stop caring about what it was supposed to teach in the first place.
So stop studying for the grade, and use KPIs to elevate your operation to the next level.


