What *is* Serverless Architecture?
Update: I’ve written a follow-up blog to this post here:
A simple definition of “Serverless”
tl;dr: A Serverless solution is one that costs you nothing to run if nobody is using it (excluding data storage)
Over the past few months I’ve been working on a startup that is fully serverless in architecture and is genuinely a joy to work on.
However, when I speak to people in the tech industry, there is a question about what “serverless” means.
Does it mean that there are no servers at all?
Or does it mean there are servers, but you don’t deal with them?
Or does it mean something else?
Here’s my take on it…
Serverless is about no maintenance
You can’t remove servers from the mix entirely, unless you just don’t need to store data of any sort. What you can remove is the need to worry about whether they are working, whether they are up, and all of those things.
It’s about utilising other people’s services: things like AWS Lambda, Auth0, Parse (although not any more!), and many other third parties. You don’t have to run servers any more.
It’s about standing on the shoulders of the tech giants (and cool startups).
Because if you don’t have to care about servers, you only have to care about how you use the services.
Serverless is not about a specific technology
There are some people shouting about specific technologies, and even GitHub usernames and repositories calling themselves “serverless”.
Serverless is more about removing the server from the list of things you need to worry about. It helps if you know about server technology (because sometimes you do need to), but it’s not a necessity.
Serverless is about (micro) functionality
When you have an idea, it’s usually something like:
“I want it to do this”
and you don’t usually say:
“And I want it to be in this data centre, on these machines, with this spec”
It’s redundant. It mostly doesn’t matter (unless it really does matter, in which case, the likelihood of serverless being useful is almost nil).
You might talk about a version of Linux, or a framework such as Django, or Node.js with Express. However, serverless goes beyond the framework as well.
The framework itself also becomes largely redundant.
Serverless becomes about exposing individual functionality rather than a whole server.
Serverless is about saying exactly what needs doing when responding to an event, and increasingly ignoring what underlying technology is required.
But it’s also about removing the need to manage uptime, server maintenance, upgrades, security vulnerabilities etc. The only bit you need to be aware of is your code. That’s it.
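To make that idea concrete, here’s a minimal sketch of what “the only bit you need to be aware of is your code” can look like: a single-purpose, AWS Lambda-style handler in Python. The event shape and function name are illustrative assumptions, not from any real deployment.

```python
import json

def handler(event, context=None):
    """Respond to one event and nothing else: greet a user by name.

    `event` is assumed to be a plain dict, as Lambda passes for
    JSON payloads; `context` is unused here.
    """
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}"}),
    }
```

That’s the whole deployable unit: no server process, no routing layer, no framework bootstrapping, just a function that answers one event.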
Serverless is simplicity, but not necessarily simpler
I’ve been running a serverless startup for a while now, and while it’s brilliant to not have to worry about the service running (I don’t) or about whether the system is doing what it’s supposed to (I don’t)…
It most definitely is not simpler to build (yet)
Because all of a sudden, you realise all those things that you relied on in the frameworks.
The session management
The open source middleware
and a whole bunch of other things.
It’s not that you can’t pull those things in. It’s just that you realise that a lot of the functions you previously built relied on a bunch of things the framework provided that were not actually needed most of the time.
So you realise that you either have to “rebuild your framework”, which is utterly pointless because it just makes each serverless function much bigger than necessary by pulling in redundant code, or…
Rethink your code
Serverless really requires you to remove redundancy from each function. Otherwise, you might as well just chuck your code into something like Heroku or Elastic Beanstalk and be done with it.
You have to barebones everything (where you can), whilst recognising the points of contact (such as a shared database) between functions. You also have to realise when it’s just better to code in the redundancy rather than “roll your own”.
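As a rough illustration of what barebones functions with an explicit point of contact might look like, here are two tiny single-purpose functions in Python. All names are hypothetical, and a plain dict stands in for the shared database:

```python
# Stand-in for the shared database: the one deliberate point of
# contact between otherwise independent functions.
shared_db = {}

def save_user(event, context=None):
    """One function, one job: persist a user record."""
    shared_db[event["id"]] = {"name": event["name"]}
    return {"statusCode": 201}

def get_user(event, context=None):
    """A separate function that only reads; it knows nothing about
    how the record was written, only where to find it."""
    user = shared_db.get(event["id"])
    if user is None:
        return {"statusCode": 404}
    return {"statusCode": 200, "body": user["name"]}
```

Each function pulls in only what it uses, and the shared database is the only place they meet.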
No set way of doing it
There is no “one” way of doing serverless.
The primary route that I see at present is the AWS Lambda route + other AWS services. This can include an API Gateway front end (but doesn’t have to — ours doesn’t utilise this very much at all).
The AWS Lambda approach is exciting because if something goes wrong, I change one little thing in one function, or create a new function entirely and re-route, rather than (for example) running tests on multiple functions I haven’t changed only to find out they’re fine, and then “deploying” to multiple servers or Docker containers or whatever.
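The “create a new function and re-route” idea can be sketched in a few lines. The routing table and handler names below are made up for illustration; in practice the routing would live in something like API Gateway:

```python
def handle_signup_v1(event):
    """Original function; left completely untouched."""
    return {"ok": True, "version": 1}

def handle_signup_v2(event):
    """The fix lives in a brand-new function instead of an edit to v1."""
    return {"ok": True, "version": 2}

# Routing is just a mapping from event type to function.
routes = {"signup": handle_signup_v1}

def dispatch(event):
    return routes[event["type"]](event)

# Re-routing: point the event at the new function.
# Nothing else is redeployed or re-tested.
routes["signup"] = handle_signup_v2
```

The unchanged functions never move; only the route changes.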
AWS Lambda works for me
And you can be certain that Microsoft, IBM, and all the other providers are looking at AWS Lambda and thinking how they could implement something similar (if they haven’t already done so).
You may find a different solution. Maybe a bunch of micro Docker containers linked to some routing layer really works for you.
Or maybe you’ll find that AWS Lambda is just really good (as I have) and gives you at least as good an ROI as your current system.
Oh, and you can move micro-functions over to test very easily, especially if you already have AWS solutions in place.
I mentioned Parse earlier, and that’s a case in point. If you utilise a third-party service, the worst thing that can happen is that service being turned off. A rewrite and a rethink are the least of the problems. It’s just a hassle.
You could jump on the bandwagon of another third party that “takes on the API” of the original, but again, they could disappear too.
So if you use a third party, make sure it’s likely to be around for the lifetime of the project, or at least have an idea of how you would move away from it in future if needed. AWS is a pretty safe bet on this one, which is why we use it. But it’s not the only choice.
Serverless is awesome
I will never go back to building something in a non-serverless way, unless the tech requires it (e.g. very low latency systems).
It will surprise you how difficult it can be to get your head around some of the issues, but it will also surprise you how little code you actually need (we currently have less than 50k of code running our entire system).
I’m pretty sure that the next step will be something like machine learning developing serverless services without the need for coders.