Digital transformation - from what, to what?

A study asking 40 experts to define digital transformation identified a huge variety of definitions, ranging from digitising parts of how something works now, to redesigning it to work radically differently, to wholesale transformation of an entire organisation, and much more. And of course Tom Loosemore’s definition of digital neatly sets the ambition.

Rather than definitions, lately I’ve been thinking about drivers: to what end we’re doing whatever it is we’re doing, whether we’re thinking of it as transformation, digital capability building, service improvement, digitisation or modernisation. I wanted to find the words to summarise 200+ page business cases and large programme documentation into a few sentences that nail the ‘why’. So we can protect the momentum of what’s most important, understand and communicate it - and constructively challenge when that isn’t happening.

I analysed (some of the) research, existing wisdom and practice, and combined this with my own experience working with, or talking to, dozens of transformation programmes, projects and teams. Here are four of the most important and common drivers of work:

Effectiveness

What ‘good’ or ‘success’ looks like for customers and users, or for an organisation, a service, a product or a broader policy. This might include user needs, desired outcomes, goals, policy intent or whatever else constitutes something valuable to those involved.

Efficiency

This is the cost, time and effort involved. In particular, how much of this is (potentially) avoidable or improvable for the organisation, as well as the people and activities involved, whether that’s staff, customers, users, patients, carers and anyone else. 

Culture, capability and agility

This is what affects our ability to do what’s needed well enough, to be able to anticipate, improve and change in future, and to do this sustainably. This could include the processes, tools and infrastructure we need to raise the game, improve our teams (quality, value, health), our skill sets or ways of working. And likely all of it. 

Risk reduction

This is about tackling what causes the major problems, hampers progress or is downright dangerous. Whether that’s lack of auditable transparency, ‘toxic’ technology, poor security or financial practice. 

So what?

To make this more practical, Matti Keltanen and I developed a few examples of what could make sense to look at for each category, to move closer to something a team could use. 

Effectiveness

Presumably much of what we do should be helping our organisations better achieve their purpose, and to help our customers and users be more successful in terms of what they need or want to do. 

That might include

  • improving the rate or proportion of events, cases, situations or transactions that end in ‘success’, at least for the ‘yes/no’ type of outcomes

  • achieving longer term impact in terms of organisational mission or policy intent

  • asking users ‘Did you successfully do what you came here to do today?’ or ‘Which parts of this guidance are clear, and which are not?’ Other revealing questions to ask or self-report could include: ‘How well do you feel cared for?’ ‘How confident are you that the right thing is happening?’

For very new services, products or processes, sometimes the fact that people can actually do something for the first time is the goal, initially - so, the occurrence, frequency or volume. But this is really about the means, rather than the end. So after 'getting it out there', the focus should swiftly move on to success rate and other measures that get to the heart of what’s driving the work - and ideally while there's still time to change course. 

Efficiency

Operating cost is important at a macro level but transformation efforts need to focus on practical and specific steps towards this. So for a team, a process, product or a service, the desired change might be to improve: 

  • the cost, time and effort involved; and

  • how much of this is (potentially) avoidable or improvable. 

Other indicators of efficiency could relate to wider cultural change and shifts in use of digital and technology. For example:

  • the number of sales people involved per deal closed 

  • the demand for customer services compared to customer service people

  • and the equivalents of these for the ‘explore, design, build and change work’ that most service organisations now do  

Culture, capability and agility 

This category is about the direction and capability needed if the end goal is to improve the two themes above. For example, building in-house ability to do more digital work without relying on external suppliers. Or developing the skill sets to manage much more of one’s own technology estate or stack. Or to do the user research needed to deliver more successful products and services. All of which are ultimately the ‘means’, so here are a few examples of ‘to what end’ that might lie behind these goals. 

  • The cycle time of learning something, getting to more certainty about something, and potentially of delivering something that works well enough. This is related to ‘shipping’ times, but with more emphasis on learning and doing ‘just enough’ to learn more. 

  • Consideration for the size or duration of the ‘bets’ we’re taking. For example, do we work for many months using our assumptions, before something happens that proves or disproves whether we were right? How can we break it down to increments that are a fragment of the current time, to improve our ability to learn and iterate?

  • How well do we stick to an agreed direction and principles? (And do we have sensible ones in the first place?). For example, when we work on important data changes, what proportion of that work results in greater access through decent APIs, or of better protection and storage, or of moving towards standardising formats?

  • What improvement could be made to the predictability and control of investment? For example, is there anything to indicate that we’re reducing our rate of spend on certainty that is false, increasing it on optimisation where there’s a good evidence base, or tackling any tendency to overspend?

  • If we’re improving ways of working, presumably team health should improve too, as well as how it feels to work here generally. For individuals, is there a better way of getting at what’s important in terms of transformation efforts? For example, tracking the self-reported proportion of people who feel they’re able to be productive, and able to use their professional skills and experience well, and enough to make a difference.

  • Headcount ratios of modern professional disciplines represented (or missing), such as developers to user researchers and designers, or enterprise architects and business analysts to product managers and service designers. Not to set targets, but if there are obvious discrepancies and imbalances in disciplines, tracking change over time could make sense.

There are many other indicators of momentum and progress in terms of digital capability. For example the GOV.UK performance platform tracked the percentage of transactions that were digital across major UK public services.  

Risk reduction

Some things won’t have a direct relationship to effectiveness or efficiency, but they’re just as important, if not more so, because they’re about safety, assurance, ethics, climate, health, environment or society. So the ‘to what end’ of these themes might take the form of:

  • how well we adhere to the standards that exist or that we set. Let’s say as a principle that we now ‘do X rather than Y’ because X is better, but for various reasons we’ve typically done Y in the past. So we could track how much X we do compared to Y over time, as a reduction in Y or as an improving proportion of X to Y. Say, moving away from storing data on bits of paper towards spreadsheets, and moving away from spreadsheets owned and edited individually towards storing data digitally with appropriate permissions, access, security, redundancy and backups

  • the proportion of time we follow specific security practices, or where we actively anticipate and monitor the security issues for the things that we develop. The lagging indicators could be the number of incidents or negative issues, the size of the impact, the duration.   
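To make the ‘proportion of X to Y’ idea above concrete, here’s a minimal sketch (the event log, period labels and practice names are hypothetical, not from any real programme) that turns a simple record of which practice was used into a per-period proportion of the preferred practice:

```python
from collections import Counter

def practice_proportion(events):
    """Given (period, practice) pairs, return the proportion of the
    preferred practice 'X' (vs legacy 'Y') for each period."""
    by_period = {}
    for period, practice in events:
        counts = by_period.setdefault(period, Counter())
        counts[practice] += 1
    return {
        period: counts["X"] / (counts["X"] + counts["Y"])
        for period, counts in by_period.items()
    }

# Hypothetical quarterly log: each entry records which practice was used once
log = [
    ("Q1", "Y"), ("Q1", "Y"), ("Q1", "X"),
    ("Q2", "Y"), ("Q2", "X"), ("Q2", "X"),
    ("Q3", "X"), ("Q3", "X"), ("Q3", "X"), ("Q3", "Y"),
]
print(practice_proportion(log))  # roughly {'Q1': 0.33, 'Q2': 0.67, 'Q3': 0.75}
```

A rising proportion over successive periods is the signal that the standard is actually taking hold, which is the point of tracking it rather than simply declaring the principle.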

What about finance?

I haven’t called out ‘finance’ as a category here, despite revenue, funding or cost being top of most transformation agendas. Because money is a lagging indicator (as many have said before me), ‘cut cost’ or ‘grow revenue’ doesn’t help with the where, the how, what to avoid and what to protect while doing that.

Finance teams could look at the four categories detailed above and consider which drivers result in better use of money or in growth across their particular organisation. As Mike Bracken writes, to manage operating costs while still supporting delivery, finance teams might sensibly look at reducing the marginal cost of changing, or introducing a new part to, a product or service. The same principle could apply to many other corporate teams (procurement, HR, commercial) in a position either to accelerate delivery and change, or to apply the brakes.

Say what we mean

What’s here is by no means exhaustive - it’s working in the open - but so far it’s been useful and simple, and it can work as a good rule of thumb to make the ambition for any new piece of work (or entire programme) suddenly much simpler. Many folk across organisations think of ‘digital’ as ‘other’ to them, or simply aren’t sure what it means, understandably. Others apply existing mental models and assume it all basically means lean six sigma, or business architecture, or something else they know by another name. Far better if we all start to say what we’re actually doing, in plain English - and to what end.

With thanks to Matti Keltanen for the suggestions of practical indicators. And to Ines Mergel, Noella Edelmann and Nathalie Haug for sharing their research. And to the folks at Public Digital for sharing their expertise.
