Is the Serverless Dream a Reality?

Ian Cackett
5 min read · Apr 4, 2018


Serverless computing heralds the dawn of a new “post-container world”, if we’re to believe what we hear, with some organisations already proudly planning to go entirely serverless.

The good news is that this seems entirely plausible for many organisations, though, as ever, the details and the general case are a little more complex.

For many of us, particularly at small startups, cloud computing solved the problem of owning and maintaining costly hardware. More recently, containerisation and container orchestration have simplified or removed the VM and OS management issues.

Serverless promises to do the same for one of the remaining cloud challenges, namely Capacity Planning: Ensuring that resources are available to meet expected demand.

Here’s how I’ve been thinking about it…

What is it?

In a nutshell, serverless computing is a new cloud computing execution model.

Rather than our explicitly deploying code to servers, instances or containers that must be provisioned in advance to meet forecast demand, resources are spun up on our behalf, when triggered, to execute the functions we provide.

For this reason, it is sometimes referred to as “FaaS” (Function as a Service).

What kinds of triggers can we use? With AWS Lambda, anything from files being uploaded to S3, through CloudWatch events and incoming emails, to HTTP requests arriving at API endpoints. All of these, and no doubt many more to come, can trigger functions, hence the widespread appeal.
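
To make this concrete, here is a minimal sketch of an S3-triggered function on AWS Lambda, written in Python. The bucket wiring lives in AWS configuration rather than in the code, and the event shape below is the standard S3 notification format:

```python
import urllib.parse

def lambda_handler(event, context):
    """Invoked by AWS Lambda each time a file lands in the configured S3 bucket."""
    for record in event["Records"]:
        bucket = record["s3"]["bucket"]["name"]
        # Object keys arrive URL-encoded in S3 notifications.
        key = urllib.parse.unquote_plus(record["s3"]["object"]["key"])
        print(f"Processing new upload: s3://{bucket}/{key}")
    return {"processed": len(event["Records"])}
```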

Benefits and Upsides

The potential upsides of going serverless are clear: the remaining VM and OS management issues disappear. Capacity planning also evaporates, as the cloud platform handles spinning up resources to meet demand. Financially, we are no longer paying for unused resources.

But is it all good news?

I’ve been thinking through some of the architectural, performance and economic factors I believe we should consider before making a move to serverless, to make sure we get the most out of it.

Architecture

As with any managed platform, going serverless imposes certain limits and constraints on our usage, usually defined in terms of the maximum code size, execution time, and memory footprint of our functions.

Such constraints either rule out or require rearchitecting of any of the following, where they are present in our application:

  • Large code bases — A code size constraint adds even more incentive, beyond the current push towards microservices, to break large, monolithic code bases into smaller pieces with fewer dependencies. Each function needs to bring minimal baggage with it.
    So perhaps this particular limitation could work to our advantage, given some redevelopment effort.
  • Long execution times — We must aim to break lengthy processing into functions that each finish within the imposed time limits (a sketch of this staging pattern follows the list). Even if we can, we still need to confirm that the economics (see later) of generating and collating the intermediate results stack up in our favour.
    If we simply can't break a lengthy computation into smaller pieces with shorter execution times, or if the economics aren't favourable, we would need to run it as a traditional service in a container, as part of a hybrid solution running alongside our serverless efforts.
  • Large memory footprints — Pre-caching a dataset or working with real-time data can, for some applications, dramatically reduce execution time… but it consumes a great deal of memory.
    Examples from my own experience include a large pre-computed model (AI/ML), real-time decisioning, and serving adverts based on complex in-memory processing.
    If smaller-footprint alternatives can't be found that retain the performance characteristics we need, the remaining footprint may need to be hosted as a service in an always-on container or instance upon which our functions depend. Again, a hybrid approach.
  • Infrequent activities — Resources that are infrequently used are spun down in a serverless world, and incur a restart cost when next used (see Performance below). It is possible to keep them “warm” with regular dummy triggers (sketched after this list), but this needs to be taken into account in our architecture.
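
On the long-execution point above, here is a rough sketch of the staging pattern. The bucket name and the expensive_* stubs are purely illustrative; in practice each stage would be deployed as its own function, with the second triggered by the intermediate upload:

```python
import json
import boto3

s3 = boto3.client("s3")
BUCKET = "my-pipeline-bucket"  # illustrative name

def expensive_first_half(data):
    return {"partial": data}  # placeholder for the real work

def expensive_second_half(partial):
    return {"done": True}     # placeholder for the real work

def stage_one(event, context):
    """First slice of a job too long for a single invocation."""
    partial = expensive_first_half(event["input"])
    # Persist the intermediate result; the upload itself triggers stage_two.
    s3.put_object(Bucket=BUCKET,
                  Key="intermediate/job-1.json",
                  Body=json.dumps(partial).encode("utf-8"))

def stage_two(event, context):
    """Triggered by the intermediate upload; finishes the job."""
    record = event["Records"][0]["s3"]
    obj = s3.get_object(Bucket=record["bucket"]["name"],
                        Key=record["object"]["key"])
    return expensive_second_half(json.loads(obj["Body"].read()))
```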
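
And for the keep-warm trick, one common pattern is a scheduled CloudWatch Events rule that invokes the function with a dummy payload, which the handler short-circuits. The "warmup" key is a convention of my own for illustration, not anything AWS-defined:

```python
def handle_real_request(event):
    return {"status": "ok"}  # placeholder for the actual business logic

def lambda_handler(event, context):
    # A scheduled rule fires a dummy payload every few minutes, keeping
    # the container resident so that real requests avoid cold starts.
    if event.get("warmup"):
        return {"warmed": True}
    return handle_real_request(event)
```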

Performance

Although going serverless lets us forget about capacity planning, we need to keep a close eye on the actual performance we achieve through the resources allocated to us on demand, particularly outliers caused by startup delays or unexpected changes in behaviour.

This means gathering metrics, if we aren't already, and using them first to compare the existing system against a serverless prototype, then during and after migration to pinpoint architectural aspects we might tune further towards a more performant serverless result.
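
One cheap way to capture those startup outliers, sketched below with a log format of my own invention: Lambda reuses a container between invocations, so module-level state survives, and a flag set at import time marks each cold start.

```python
import time

COLD_START = True  # module scope: evaluated once per container, not per call

def do_work(event):
    return {"status": "ok"}  # placeholder for the real request handling

def lambda_handler(event, context):
    global COLD_START
    cold, COLD_START = COLD_START, False
    started = time.time()

    result = do_work(event)

    # Structured log line; CloudWatch Logs captures anything on stdout.
    duration_ms = (time.time() - started) * 1000
    print(f"duration_ms={duration_ms:.1f} cold_start={cold}")
    return result
```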

As we have essentially handed over some of our burden and hence our control (hopefully to our benefit), we need to make sure we are getting the performance we expect in return.

Economics

At a high level, the potential savings are obvious, but the full economic picture is more subtle, and the bottom line really does depend on our specific architecture and usage pattern.

Economic factors include the CPU, bandwidth and storage costs of the following:

  • Savings from unused resources (also lower human cost to manage them).
  • Repetition of resource startup (pre-warming/keep-alive is possible, see above).
  • Repeated pre-loading (in-memory) or accessing (if shifted elsewhere) of large datasets.
  • Additional communication due to breaking overall processing into smaller functions, e.g. use of a message bus to decouple functions, storage of intermediate results, etc.

Plugging the above together for our specific application gives an idea of how serverless stacks up economically for us as a whole. This may indicate the expected benefits but, for certain architectures and usage profiles, we may find the conclusions less obvious.
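
As a rough illustration of that plugging-together, here is a back-of-envelope sketch comparing an always-on instance against pay-per-invocation pricing. All prices and workload figures are illustrative placeholders, not current AWS rates; substitute real numbers for your own workload:

```python
# Illustrative placeholder prices; substitute real ones for your provider.
INSTANCE_PER_HOUR = 0.10           # hypothetical always-on instance
LAMBDA_PER_MILLION_REQS = 0.20     # hypothetical per-request rate
LAMBDA_PER_GB_SECOND = 0.00001667  # hypothetical per GB-second rate

def monthly_serverless_cost(reqs, avg_ms, memory_gb):
    gb_seconds = reqs * (avg_ms / 1000.0) * memory_gb
    return (reqs / 1e6) * LAMBDA_PER_MILLION_REQS \
        + gb_seconds * LAMBDA_PER_GB_SECOND

def monthly_instance_cost(instances=1):
    return instances * INSTANCE_PER_HOUR * 24 * 30

# e.g. 5M requests/month, 200 ms average, 512 MB functions.
print(f"serverless: ${monthly_serverless_cost(5_000_000, 200, 0.5):.2f}")
print(f"instance:   ${monthly_instance_cost():.2f}")
```

Low-traffic, bursty profiles favour the serverless column; push the volume, duration or memory high enough and the always-on instance can win, which is exactly the less obvious case.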

The Serverless Dream?

So how realistic is going “entirely serverless”?

For some applications, it definitely seems realistic and a complete new way forward, provided we iron out the architectural wrinkles and take a pragmatic approach to measuring the economic and performance benefits.

For others, it may work only partially, meaning a hybrid solution is best (serverless alongside reduced server / instance / container usage).

And for others still, on closer consideration, the benefits may simply not stack up yet.

As with any advance in tech, particularly where there seem to be so many “obvious” upsides, what matters most is that we prove the benefits are actually there for us.

Serverless is definitely worth a much closer look.

