Making Productivity Tangible
It's no wonder that customers, managers of software development, and even developers themselves are so focused on code functionality and code efficiency (e.g. performance, scalability, usability, security). Both represent what's tangible to them. Both are runtime qualities of software. They are what's valuable to users.
Functionality and efficiency are what customers are paying for. Or at least, that's what they think - because in truth they want more than that. Customers are not satisfied merely by receiving software that fulfills some runtime requirements; they also want a team that delivers those qualities quickly, reliably, and tirelessly.
Customers want software to help them become more productive. And they want a permanently productive team building this software over weeks, months, years, even decades.
Software development thus is about:
- code functionality,
- code efficiency, and
- team productivity
Whatever gets done in software development has to serve at least one of those requirements. If that's not the case, the customer won't be willing to pay for it.
Or to put it the other way around: whatever gets done in software development is what the customer is willing to pay for.
And what are the criteria for "willing to pay for"? Relevance (or value) and tangibility.
Too bad that team productivity - or to be precise: sustainable team productivity - does not meet both of these criteria. It is, of course, relevant and valuable - but it's not tangible.
Team productivity is not a runtime property of software that could easily be tested. Nor is it measured or made transparent in any other way. Hence the customer is not really paying for it to be delivered in a sustainable fashion. Yes, that's what I see in most teams I meet in clean code development trainings.
(Agile teams are no exception to this observation, by the way. Even if they track velocity, that's not a measurement of productivity. Velocity is a purely made-up value: if you want to progress at a certain velocity, simply assign the increments enough story points.)
The more I think about this, the stranger it seems to me: software development is one of the most important industries, and yet there is no systematic measurement of its performance? What?! 😳
Or maybe I'm missing something here?
No, I don't think so. The simple testimony to this is the growing amount of legacy code, the ever-deeper brownfield, the swelling big ball of mud.
Smelly code, dirty code, is the result of a lack of attention to team productivity. Technical debt is a term coined for code that has been optimized for short-term productivity, not long-term productivity. And technical debt abounds - whether its creators know it or not.
To me this all inevitably leads to one conclusion: software development is not measuring its long-term productivity. It does not make it visible, it does not make it tangible – hence the development process is not balanced. There is no optimization for sustainable productivity to counter the optimization for the immediate production of functionality and efficiency.
Sooner or later the effects of this become obvious, though: conflict, pressure, stress. That way the lack of sustainable productivity finally becomes tangible to the customer (or manager): delivery of new features slows down noticeably, and the "WTF!" exclamations from developers increase in frequency. Software production slips deeper and deeper into unreliability.
What's not measured, what's not tracked, what's not made tangible is below the radar of any budget-sensitive manager. It's that simple. Any complaint about the lack of clean code is thus a complaint about the lack of some metric.
Metrics for productivity
But what's a reasonable metric for clean code? It's nothing static code analysis can provide. Forget about cyclomatic complexity, fan-out vs. fan-in, lines of code per class, etc. A customer is not and never will be interested in any of this. Only if some value of cyclomatic complexity and the like actually makes a difference to her will the customer pay for it.
What makes a difference, though, is the presence of a feature (functionality + efficiency) - and a certain level of productivity. Code does not need to be clean; cleanliness is not tangible to customers. Rather, it needs to be in a shape that keeps productivity at a high level.
And how do you check if code is in such a shape? By watching the cyclomatic complexity and the like? No! That's secondary, that's indirect at best.
Instead, primary values need to be watched. And what are primary values? Something that actually makes a difference to the customer with regard to team productivity.
What could that be?
Here are a couple of suggestions (a small code sketch follows the list):
- Cycle time: Track how long it takes to produce each increment. (I'm using "increment" as a neutral term here to denote something of value to the customer's users. That can be a feature or a bug fix.) Yes, just track and plot - and then watch how the numbers change over time. Smaller numbers are better; a downward trending curve is better.
- Throughput: How many increments are finished ("moved to Done") per period (e.g. per week). Track and plot - and watch how the numbers change over time. Larger numbers are better; an upward trending curve is better.
- WIP: How many increments are worked on at the same time? Track and plot - and watch how the numbers change over time. Smaller numbers are better; a downward trending curve is better.
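To make this concrete, here is a minimal sketch of how these three values could be computed once increments are tracked with the two essential timestamps (entering in progress, entering done). The `Increment` record and its fields are my assumptions for illustration, not any particular tool's format:

```python
from dataclasses import dataclass
from datetime import date, timedelta

# Hypothetical record shape: one increment with its two essential timestamps.
@dataclass
class Increment:
    id: str
    started: date  # entered the "in progress" stage
    done: date     # entered the "done" stage

def cycle_time_days(inc: Increment) -> int:
    """Calendar days from starting work on an increment to finishing it."""
    return (inc.done - inc.started).days

def throughput_per_week(increments: list[Increment]) -> dict[date, int]:
    """Finished increments per calendar week, keyed by that week's Monday."""
    counts: dict[date, int] = {}
    for inc in increments:
        week = inc.done - timedelta(days=inc.done.weekday())
        counts[week] = counts.get(week, 0) + 1
    return counts

def wip_on(day: date, increments: list[Increment]) -> int:
    """Increments started but not yet done on a given day."""
    return sum(1 for inc in increments if inc.started <= day < inc.done)

# Made-up sample data - in practice: track, plot, and watch the trends.
history = [
    Increment("A-1", started=date(2024, 3, 4), done=date(2024, 3, 7)),
    Increment("A-2", started=date(2024, 3, 5), done=date(2024, 3, 12)),
]
print([cycle_time_days(i) for i in history])  # [3, 7]
print(throughput_per_week(history))           # one finish in each of two weeks
print(wip_on(date(2024, 3, 6), history))      # 2
```

Plotting these values per day or per week then yields exactly the trend curves described above.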
These values could be categorized, at least in a pretty broad-brush way; a short grouping sketch follows the list:
- Kind: Track the above values separately for features and bug fixes. That way it becomes clear where effort was invested and how the effects of this investment might differ between the two increment categories.
- Complexity: Increments might be roughly judged to require a certain amount of effort. Without falling into "the estimation trap", I think it's ok to label increments as small, medium, or large (t-shirt sizes) before starting to work on them. No estimation of actual effort should be derived from this, though. It's just a record of team members' gut feeling, to be used later when analysing the tracked values and calculating fact-based forecasts from them.
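For illustration, slicing the tracked values by such categories could look like the following sketch; the record shape and the labels are assumptions, not prescriptions:

```python
from statistics import median

# Hypothetical records: cycle time in days plus the two category labels.
increments = [
    {"kind": "feature", "size": "M", "cycle_days": 6},
    {"kind": "bug fix", "size": "S", "cycle_days": 2},
    {"kind": "feature", "size": "S", "cycle_days": 3},
]

# Group cycle times by kind to see how the categories differ.
by_kind = {}
for inc in increments:
    by_kind.setdefault(inc["kind"], []).append(inc["cycle_days"])

for kind, days in sorted(by_kind.items()):
    print(f"{kind}: median cycle time {median(days)} days")
```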
In addition, a Cumulative Flow Diagram (CFD) could be plotted to make it easier to detect counterproductive process patterns. See, for example, this book by Daniel Vacanti for a comprehensive discussion of CFDs.
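A CFD can be derived from the same timestamps. Here is a sketch using matplotlib, assuming each increment additionally carries a date for when it entered the ready stage; the sample data is made up:

```python
from datetime import date, timedelta
import matplotlib.pyplot as plt

# (ready, started, done) dates per increment - made-up sample data.
increments = [
    (date(2024, 3, 1), date(2024, 3, 4), date(2024, 3, 7)),
    (date(2024, 3, 2), date(2024, 3, 5), date(2024, 3, 12)),
    (date(2024, 3, 3), date(2024, 3, 8), date(2024, 3, 14)),
]

start, end = date(2024, 3, 1), date(2024, 3, 15)
days = [start + timedelta(days=n) for n in range((end - start).days + 1)]

# For each day, count the increments sitting in each stage.
done = [sum(1 for r, s, d in increments if d <= day) for day in days]
in_progress = [sum(1 for r, s, d in increments if s <= day < d) for day in days]
ready = [sum(1 for r, s, d in increments if r <= day < s) for day in days]

# Stacked bands: band widths show queue sizes, band boundaries show
# cumulative arrivals per stage over time.
plt.stackplot(days, done, in_progress, ready,
              labels=["Done", "In progress", "Ready"])
plt.legend(loc="upper left")
plt.ylabel("Number of increments")
plt.title("Cumulative Flow Diagram")
plt.show()
```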
What I envision is nothing less than a public productivity dashboard. And that's very different from anything showing story points, because the dashboard is based on facts, nothing but facts. It depicts the evolution of unemotional counts.
This dashboard should be prominently placed to be visible to the team, managers - and customers. Yes, customers! It should be part of the deliverables. It should be included in every release as documentation of the evolution of one of the three basic requirements: team productivity.
And that should be a comparatively easy feat, I'd say, because all the numbers can be calculated from what's tracked anyway: issues. Any decent issue tracking tool lets you move issues through stages like ready, in progress, done. And any decent issue tracking tool lets you export issues with timestamps documenting when they were moved into these stages.
Nothing more is necessary than a list of issues representing increments, each with at least two timestamps: when did the issue enter the in progress stage, and when did it leave this stage and enter the done stage?
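Reading such an export could look like this minimal sketch. The CSV column names are assumptions; adapt them to whatever your tracker actually exports:

```python
import csv
from datetime import datetime

def load_increments(path: str) -> list[dict]:
    """Parse an exported issue list into records with the two timestamps."""
    records = []
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            records.append({
                "id": row["id"],
                "started": datetime.fromisoformat(row["in_progress_at"]),
                "done": datetime.fromisoformat(row["done_at"]),
            })
    return records

# From here, the cycle time of each increment falls out directly:
#   (rec["done"] - rec["started"]).days
```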
Consider the categorizations to be only bonus attributes. If there's nothing more than timestamps, then that's fine (for a start). Better to start visualizing than to continue stumbling around in the dark.
I think we owe it to the reputation of our industry to step up our game. We need to become more professional, we need to become more transparent. The customer deserves to get at her fingertips not only functional and efficient software, but also a team whose productivity is not a matter of prayer and gut feeling, but of a systematic process and deliberate improvement based on hard facts.
And this is in no way opposed to the notion of software craftsmanship. Quite the contrary: I deem it at the heart of it, because such transparency is a matter of professionalism, as is expected of any mature industry nowadays.