It’s not just about Lambda — it’s about understanding constraints too
But so often the discussion focuses on…
Lambda vs Containers or
Lambda vs EC2 Instances or
Lambda vs “How I’ve done it for years”
I’m very happy to have these discussions and happy to debate the topics involved, but the problem I see is that a conversation like this can miss the point by quite a long way.
Stepping Back from the Code
The CTO in me is often stepping away from the code to look at the “Bigger Picture” on a lot of technology choices. That bigger picture with Lambda is often around maintenance of code and scaling at a very granular level. That in and of itself is really powerful.
But it’s not just about the technology choice.
It’s also about the practices around Continuous Integration and Continuous Delivery (CI/CD), and the management processes (e.g. agile) that go along with building the technology, the hiring, the team and so on. I saw this thread yesterday and it got me thinking about all this.
One point to pull out of that thread: CI is not something your DevOps team does for you.
Why is this important? Because CI is a process that all in a team are supposed to do, not something that is done to you.
It’s the same with agile. There was never meant to be a fixed way of doing agile. It was meant to be a set of ideas that you adapted to your scenario.
One of the biggest mistakes I’ve seen in companies is choosing a technology without a good understanding of the underlying processes, and of their impact on everyone reporting to a CTO or senior manager.
It comes back to a point I make often. Total Cost of Ownership is not just about technology choice.
It’s entirely possible to build really great code with a development team of any size in C or Perl or Go or Python or Java or Node.js… if you have good processes in place.
Now it’s completely true that some of these languages are more utilised than others, and that is often down to a number of factors: the popularity of the language, the number of developers on the market, the availability of good documentation and support, the perception of the language within the business domain (e.g. Java and banking), and a whole host of other things.
So a good CTO will be making a technology decision based upon their mix of these elements and more.
But a CTO will also be interested in how the technology fits into the business in other ways. Some businesses will require serious precision in the technology (e.g. satellite manufacturers) and others can get away with much more lax coding solutions (e.g. web developers).
Sometimes your solutions will have constraints that mean you will need Serverless Applications, containers, distributed devices or instances. Your constraints are different to everyone else’s and that means you should be defining your solutions accordingly.
Constraints are important
When making a business decision you always have constraints.
These aren’t bad things, but good things.
Unfortunately the same is true of technology choices.
Your choice will always bring in constraints.
AWS Lambda has a 5-minute limit on processing.
It’s a constraint. It’s neither good nor bad in and of itself.
And if you build your solution without taking it into account, you can find yourself in difficulty, for example when a process runs longer than 5 minutes.
However, if you build your solution with that constraint in mind from the beginning, you can easily avoid the problem. For example, you can set up a Step Functions workflow that spins up a container, pushes the processing into it, sets up a queue with a triggered Lambda function to receive the results from the container, and possibly destroys the container afterwards.
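To make the idea concrete, here is a minimal sketch of the routing decision such a design implies: work that fits comfortably inside Lambda’s time limit runs inline, and anything longer is handed off to a longer-running environment. This is purely illustrative, not code from any AWS service; the `estimated_seconds` field, the function names and the safety margin are all assumptions for the example.

```python
# Illustrative sketch only: names and thresholds are assumptions.
# Lambda's processing limit at the time of writing: 5 minutes.
LAMBDA_LIMIT_SECONDS = 300

def choose_execution_path(estimated_seconds, safety_margin=0.8):
    """Decide whether a job can run inside a single Lambda invocation,
    or should be handed off to a longer-running environment
    (e.g. a container started by a Step Functions workflow).

    `safety_margin` leaves headroom so we never run right up to the limit.
    """
    if estimated_seconds <= LAMBDA_LIMIT_SECONDS * safety_margin:
        return "lambda"
    return "container"

def handler(event, context):
    # `estimated_seconds` is a hypothetical field supplied by the caller.
    path = choose_execution_path(event.get("estimated_seconds", 0))
    if path == "lambda":
        # Short job: process inline, within this invocation.
        return {"route": "lambda"}
    # Long job: a real system might start a Step Functions execution
    # here, which runs the work in a container and notifies a
    # queue-triggered Lambda when it finishes.
    return {"route": "container"}
```

The point is that the 5-minute constraint is handled at design time, by routing, rather than discovered at runtime as a timeout.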
“It’s how we’ve always done it”
One of the frustrations I often have when discussing Serverless Applications is that people either want to do it the same way as before, or simply point to the constraints as blockers.
Just because it’s a different set of constraints than you are used to doesn’t mean it’s a bad choice of technology.
The Twitter thread above discusses how, over the past few years, we’ve largely got used to a feature branching model with version control systems. We’re so used to it that it’s pretty much seen as a “best practice”. But the thread argues that feature branching goes against the original ideas and constraints of Continuous Integration, and is a possible reason for the disconnect between the original idea of CI and current practice.
And I have to say I tend to agree that, for a large proportion of the organisations and setups I’ve seen, feature branching is relatively unnecessary (not for all scenarios, but for many). For open source projects it is different, as the thread suggests, but it’s different by the nature of the team, not the project itself.
But the discussion does open up the wider point:
What have we personally “learned” in the tech community as “best practices” that are actually unhelpful and/or just plain wrong?
Maybe they were “best practices” at the time they were developed because of the availability of technologies that have now been superseded?
An example might be that we’re so used to being able to write long-running tasks (ones regularly lasting longer than 5 minutes) that we have forgotten about some of the technologies that can both optimise and simplify those tasks.
I would argue that Serverless Applications are a much bigger leap than the Server to Container leap that a lot of people (and startups) are making, and that is causing problems.
Maybe that is because the constraints are seen as overly restrictive or unnecessary?
Or maybe it’s just that Serverless Applications are challenging our own personal biases and “best practices” to such an extent that we dismiss them without consideration?
So why is this important?
Simply put, the decision as to whether to go Serverless or to go Containers or instances or whatever is often a decision that is made far too early in the process.
Maybe the first step in making a technology choice is always understanding your business domain and the constraints you can work with.
Because there are very few scenarios that I can think of where Serverless Applications don’t actually fit.
And choosing a technology because it’s what you’ve used before, or what your technology team is “used to”, is actually relatively lazy decision-making.
And that’s very important.
I currently work for AWS as a Senior Developer Advocate for Serverless based in the UK and working in EMEA.
Opinions expressed in this blog are mine and may or may not reflect the opinions of my employer.