Means and Ends: When what matters most is hardest to measure


Guest Blog by Michelle-Joy Low, Ph.D., Head of Data & AI at Reece Group

Revisiting some long-time good reads reminded me of Sternberg’s College Admissions for the 21st Century, a thorough examination of how standardised college admission tests fail to measure what’s really important. It exemplifies the dark side of the well-publicised mantra of “What gets measured gets managed” (and its countless adaptations). Today, measurement is still touted as the panacea in businesses striving for high performance, despite its well-documented risks of incentivising value-destroying behaviours. For executives overwhelmed by the complexity of modern organisations, it’s understandable why they might ignore these risks and gravitate toward a small number of headline metrics that are easy to measure — usually in dollars (it also makes it easier for the board to calculate their incentives). When challenged on whether a business’s metrics measure what actually matters — like culture, judgement, and intent — many shrug and point to the overly familiar Too Hard Basket™.

Who’s actually footing the bill?

For analytical teams, especially those working through transformation agendas, the risk of the surrogation snare — where the pursuit of a single metric comes at the detriment of an entire company — is greatly amplified. In my view this stems from a unique property of analytical teams: they rarely operate data-producing systems, nor are they the end consumers of data & insight. Despite controlling neither data production nor consumption, they are held accountable for lifting the organisation-wide creation of value from data (e.g. data-enabled decisions, embedding ML in products and processes, etc.).

The distance and abstract relationship between the domain expertise of analytical teams and outcomes in the data value chain is unlike any other part of an organisation: where a Software team may stand behind the quality of features shipped, or a Sales team may be accountable for the performance of a campaign, attribution of an analytical team’s impact isn’t straightforward. Take, for instance, data-informed decisions: Who answers for a poor decision made on the back of a misunderstood financial metric? The executive who placed blind faith in “The Numbers”, the analyst who published the report in a rush after hours to meet a deadline, the data team who inherited a monolithic, undocumented data model, or the source system team who remodelled source data without realising its downstream impacts? This example, while contrived, will probably feel familiar and bring with it the looming shadow of the Too Hard Basket™.

 

There are few areas outside of Data & Analytics where accountability rests with a team that holds so little agency for the outcomes they drive.

But good Data Leaders are deeply invested in driving an organisation-wide agenda and (should) have a penchant for numbers. And with that comes an incentive to study how the mantra of “What gets measured gets managed” could play out organisationally. While written some time ago, Caulkin’s excellent and still timely article cautions against accepting a status quo that only measures the easily measurable. In it he writes that reducing management’s concerns to only what can be easily enumerated hides, and disincentivises, efforts to consider less enumerable but more important matters in need of care. How does this look in the context of Data?

Do the ends justify the (unseen) means?

The data value chain provides a useful backdrop against which to examine the effects of measurement. In short, value creation with data is invariably the product of a cross-company, cross-functional collaborative effort. But that value is most visible, and unfortunately most measurable, at the very end of the chain (“actionable insights”, to borrow an overused platitude): be it a forecast converted to cost savings, or a segment estimate turned opportunity, it is tempting to track such ends, and only such ends, as outcome measures.

It’s not uncommon to see headline metrics in Data team quarterly business reviews like “11 new assets delivered”, or even “15 projects completed”. In more dollar-driven companies, teams may even attempt to quantify the impact of those projects in dollar terms. Yet such an approach ignores what it takes to achieve those ends: scalable ingestion & curation patterns, data integrity & ownership disciplines, and everything in between. In most data programs, these ‘means’ can easily outweigh the ‘ends’ to the tune of tens of millions per year.

Consider what is actually at stake if a company measures Analyst performance on turnaround times alone: speed of response would be the visible output, with speed-at-all-costs tech debt as the invisible impact on Engineering. The same would be true of measuring any other isolated part of the data supply chain for its ‘ends’ — measuring the ends and only the ends creates incentives to justify any ‘means’ of getting there. Worse, it erases the value of organisational culture: How much overtime was worked to deliver these outcomes? Were those assets and projects connected to real business value, or just cherry-picked vanity metrics? More often than not, organisations don’t stop to ask whether any part of their culture would be reflected in their metrics. If “being collaborative” is important, would an un-collaborative delivery culture show up in any of today’s metrics?

What’s actually IN the Too Hard Basket™?

Drawing wisdom from Conway’s Law, one needs to acknowledge how the state of process and communication will shape the quality of data deliverables, be they analyses or products. Because of Data teams’ unique position as connectors between producers and consumers of data, consistently delivering high-quality data products requires a healthy culture across the entire data value chain, facilitating the flow of information in both directions. To that end, a wise leadership team should seek measurements of behaviour across the entire data value chain (the ‘means’), not just of outputs (the ‘ends’).

Take any scenario, say the delivery of a new ML-backed product — is accountability represented across the product, infrastructure and engineering departments? How often are they meeting to discuss and resolve their competing, multi-domain priorities? Does the executive-level reporting reflect the balance between those priorities? Does a clear basis exist for decision-making between teams, and is it consistently applied?

It can feel hard to get started, swimming in a sea of potential questions and points of measurement. But a good place to start is with the basics of operations: Do teams know how much work is actually needed? Do they know what they’re committing others to? Are they communicating precisely about these asks, or is there sprawling scope creep? These questions are powerful because answering them inherently drives the establishment of channels for receiving and prioritising the smorgasbord of data requests. Combine this with a sweep of those channels for whether teams are engaging respectfully and voilà: you’ll have your first measurement of collaboration health.
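To make that concrete, here is a minimal sketch in Python of what a first collaboration-health sweep over an intake channel might look like. It assumes requests can be exported from wherever asks land (a ticketing system, say); the DataRequest fields and the collaboration_health function are hypothetical names invented for illustration, not any particular tool’s API.

```python
from dataclasses import dataclass
from statistics import mean

# Hypothetical shape of a data request pulled from an intake channel
# (e.g. a ticketing-system export). Field names are illustrative only.
@dataclass
class DataRequest:
    has_written_scope: bool       # was the ask stated precisely?
    estimate_hours: float | None  # did anyone size the work?
    scope_changes: int            # how often did the ask change mid-flight?
    respectful_exchange: bool     # human-reviewed flag per thread

def collaboration_health(requests: list[DataRequest]) -> dict[str, float]:
    """Summarise a channel sweep into a few first-pass 'means' measures."""
    n = len(requests)
    return {
        # share of asks with a written scope (precision of communication)
        "scoped_ratio": sum(r.has_written_scope for r in requests) / n,
        # share of asks that were actually sized (do teams know the work?)
        "sized_ratio": sum(r.estimate_hours is not None for r in requests) / n,
        # average mid-flight changes per ask (a rough scope-creep signal)
        "avg_scope_changes": mean(r.scope_changes for r in requests),
        # share of threads reviewed as respectful exchanges
        "respectful_ratio": sum(r.respectful_exchange for r in requests) / n,
    }

# Example sweep over three fabricated requests
sample = [
    DataRequest(True, 8.0, 0, True),
    DataRequest(False, None, 3, True),
    DataRequest(True, 16.0, 1, False),
]
print(collaboration_health(sample))
```

The respectful_exchange flag stands in for the human-reviewed sweep of channel tone; the broader point is simply that once asks flow through a visible channel, these ‘means’ become countable at all.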

While data teams are exposed to a large blast radius of such scenarios, these considerations are realistically relevant regardless of domain:

such scenarios are windows into organisational culture — how its values are lived out.

Where measurement plays a role is in how closely measurement systems, in their design and execution, stack up against what an organisation truly values.

It won’t be easy, but…

In contemplating these relationships between measurement, process and behaviour, an important observation emerges: a key difficulty in measuring culture comes down to the sheer volume of behavioural signal that would need to be picked up in fine-grained detail — things like tone of voice in everyday conversation — the active recording of which would create massive administrative, not to mention ethical, overheads. That said, the difficulty of measurement should not be confused with impossibility.

First, leaders must recognise that the lack of action impacts culture all the same — maybe even to a larger extent than if there were intervention. For example, not addressing repeat underperformance, say in the spirit of ‘kindness’, is in fact being terribly unkind to high-performing individuals in the team who have to bear the costs of said underperformance, and over time is almost certain to cause retention issues.

It is also important to understand the additive effect of choice architecture (i.e. making it easy to behave well) inherent in a team’s operating model. If culture is shaped through countless micro-interactions, then let this be through intentional design of the ‘means’ through which these interactions happen. Setting standards around publishing in-flight work and requests, for example, provides transparency on real (and competing) demand on capacity; embedding strong documentation and delivery disciplines builds corporate memory and responsibility for past decisions.

Upholding boundaries for communication builds trust that these boundaries work,

and removes the incentive for “offline” threads to get the job done. Measurements of such ‘means’ — where the real behaviours happen — far outstrip the effectiveness of measuring business-outcome ‘ends’ alone.

Against this backdrop, leaders who want to transform culture must ask themselves whether they’re really willing to do what it takes. Sustaining behaviourally-designed operating rhythms is a tremendous commitment, often requiring structural change and management overhead to generate accountability through both ‘means’ and ‘ends’. Crucially, this accountability must hold even when the globally optimal outcome for an organisation requires compromise on the ‘end’ itself, not just on each of its component ‘means’. For example, the needs of one very vocal group today may not be worth the loss of talent in six months’ time — and bringing such accountability to bear is no trivial effort.

The mettle to tackle grand challenges like these is ultimately forged in purpose. If being a Data Leader means more than just building cool, shiny widgets with new technology, then we must embrace our responsibility for helping organisations see their culture in numbers.

About the author:

Michelle is a senior executive with deep expertise in the optimisation of data, AI and human behaviour towards better commercial decisions and societal benefit. She established the enterprise Data & AI practice at Reece Group, and leads the strategy, execution, and culture around Reece ANZ's use of data and intelligent technologies.

This article was first published on reecetech here.