Feature velocity and Serverless
Last week at JeffConf, during one of the panels, a panelist (I forget who) said that companies aren’t so interested in saving money on their hosting costs with cloud, since hosting is already relatively cheap. What companies are interested in is improving their feature velocity.
This point hit me quite hard. One of the things I go on about with Serverless is the Total Cost of Ownership (TCO) that goes along with it. Reduced maintenance and fewer bugs and problems fit very well with Serverless, but I hadn’t thought about Feature Velocity (FV) very much.
So I have been thinking about it.
What is feature velocity?
Very simply it’s how fast new features can be added to a product.
It’s a way of measuring whether a tech team is delivering additional features, and it’s easy to understand…
…but not that easy to measure.
It’s sometimes quite hard to define in the context of a tech team, simply because features are not a fixed size or timeframe, so different features take different lengths of time.
You could break each feature into a known delivery unit called a “unit of work” (which many people do), but prior to starting a feature this is simply guesswork (educated guesswork for an experienced team). It’s also not particularly easy to do, because there is bias involved. If a team wants to be seen as having a high feature velocity, they can easily over-estimate by adding extra “units” of effort at the start, deliver quickly, and voilà: high feature velocity.
Units of work can be things like days of effort (a common one) or story points, and a lot of agile approaches use these as the basis for working out whether a team is delivering its work fast enough.
It’s a broad and relatively helpful concept for a team to know whether they are ahead or behind. It is also a useful concept for those outside the tech team to use as a way of measuring whether a team is performing (or not!)
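To make this a little more concrete, here is a minimal sketch of velocity measured in units of work (story points, in this case). The sprint names and point values are invented purely for illustration:

```python
# Minimal sketch: velocity as "units of work" (story points) completed per sprint.
# The sprint data below is invented purely for illustration.

sprints = [
    {"name": "Sprint 1", "points_committed": 30, "points_completed": 26},
    {"name": "Sprint 2", "points_committed": 28, "points_completed": 31},
    {"name": "Sprint 3", "points_committed": 32, "points_completed": 22},
]

for sprint in sprints:
    velocity = sprint["points_completed"]
    delta = velocity - sprint["points_committed"]
    status = "ahead" if delta >= 0 else "behind"
    print(f"{sprint['name']}: velocity {velocity} points ({status} by {abs(delta)})")

# Note the bias problem described above: inflate the points assigned to each
# feature at estimation time and the same amount of work shows up as a
# "higher" velocity.
```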
The velocity snapshot
The biggest problem I have with the idea of feature velocity as a measure of how well a tech team is doing is that it’s a snapshot tool. At any given point in a well-managed team, you can see whether the team is ahead or behind (so often it’s behind… but that’s another conversation about estimation) by looking at the feature velocity.
This is absolutely fine if you take into account one thing:
Feature Velocity is great when you are simply developing new features
The problem is that unless you are quite a large business, you are definitely not just developing new features.
In a small tech team, you are likely to have multiple areas of focus, including maintaining infrastructure, fixing bugs, and paying down architectural and technical debt.
You will also have DevOps work around all of this: CI/CD pipelines to manage, code repositories to look after, and so on.
In other words, the work of a tech team is never just developing new features. Over time, a lot of other tasks feed into your work and therefore affect your feature velocity.
So what has this got to do with Serverless?
It’s quite simple really. Feature Velocity needs to be taken over a period, and not as a snapshot. In other words, it needs to be taken in the context of the whole rather than in isolation.
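Here is a small sketch of that difference, with weekly figures invented for illustration; the snapshot and the period view of exactly the same data tell very different stories.

```python
# Sketch: a snapshot of feature velocity vs. velocity taken over a period.
# The weekly "features delivered" figures are invented; in reality some weeks
# are eaten by bug fixes, infrastructure work and technical debt.

weekly_features_delivered = [5, 4, 0, 1, 5, 4, 2, 0]  # last entry = "this week"

snapshot = weekly_features_delivered[-1]
over_period = sum(weekly_features_delivered) / len(weekly_features_delivered)

print(f"Snapshot (this week):    {snapshot} features")
print(f"Average over the period: {over_period:.1f} features/week")

# A snapshot of this week alone says the team has stalled; the period view
# shows a steadier picture and prompts different questions, such as what ate
# weeks 3 and 8.
```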
When you take Feature Velocity over a period, the senior leadership will have a different set of questions.
If you build a piece of software, and release it, and that generates a number of bugs, then you have reduced your future feature velocity.
If you build on top of a specific stack and then find that the stack needs replacing, you have reduced your future feature velocity again.
If you build a piece of software at a slower velocity now, you may find that your future feature velocity is increased.
All of these are hypotheticals of course, but when you add in a Serverless solution into the mix, what changes?
So, one of the main things I always talk about with Serverless is Total Cost of Ownership. The fact is that if you build the right solutions, you should end up with a much smaller DevOps burden in terms of the time taken to manage and maintain your entire solution.
One of the advantages of Serverless is that you can replace elements of a solution “in place” a lot more easily. Swapping one Lambda function for another often has minimal impact on the rest of the solution, which means that technical debt and bug fixes can be dealt with quickly, leaving more of your velocity for features.
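As a rough sketch of what replacing a function “in place” can look like (the function name and artifact path here are hypothetical, and this assumes standard AWS credentials are configured for boto3):

```python
# Rough sketch: swapping a single Lambda function "in place" without touching
# the rest of the solution. The function name and artifact path are hypothetical.
import boto3

lambda_client = boto3.client("lambda")

with open("build/process-orders.zip", "rb") as artifact:  # hypothetical build output
    lambda_client.update_function_code(
        FunctionName="process-orders",  # hypothetical function name
        ZipFile=artifact.read(),
        Publish=True,  # publish a new version so the change can be rolled back
    )

# Only this one function changes; the API Gateway routes, queues and other
# functions around it are untouched, which keeps the blast radius small.
```

In practice you would normally do this through a deployment tool (Serverless Framework, AWS SAM, etc.), but the underlying operation is the same: one function’s code is replaced and nothing else moves.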
Not to mention that if you use fewer lines of code (and fewer libraries), as I would advocate, you end up with less code to maintain. That means you reduce (to a point) the opportunity for bugs to occur, which in turn reduces the bug fixes and technical debt you accrue.
Add to all of this that you are reliant on your cloud provider for the infrastructure, and you minimise your architectural debt as well. You don’t have to think about provisioning new RDBMS servers or increasing the number of instances in an autoscaling scenario, and your choice of technologies is also limited (which is often a very good thing).
Feature Velocity is not necessarily everything
The more I look at this issue, the more I see a significant increase in Feature Velocity over a whole programme timescale, when you use a Serverless approach.
I do think Feature Velocity is worthwhile as a snapshot of “where we are right now”, but it is one of many short-term metrics that get overused from a senior position to judge whether we are moving “fast enough”.
Working out whether a tech team is working efficiently and well is a very hard thing to gauge. Often a senior leader will look for a simple metric to give them a clue as to whether the team is performing or not. I would argue that we need to think more long term here, and that Serverless has a strong part to play in improving a team’s efficiency over the medium to long term.
So, if someone is saying you’re not moving fast enough, just consider whether they are looking at a snapshot or at the wider context.
My take is that Serverless gives you the opportunity of greater Feature Velocity, but only over time.
UPDATE: There’s a follow up post to this one that takes this on a little further and may provide a little more context https://medium.com/@PaulDJohnston/feature-acceleration-and-serverless-a307424bf9e5