Is the Serverless Dream a Reality?

If we’re to believe what we hear, serverless computing heralds the dawn of a new “post-container world”, with some organisations already proudly planning to go entirely serverless.

What is it?

In a nutshell, serverless computing is a cloud execution model in which we deploy individual functions rather than long-lived servers: the platform invokes each function in response to events, allocates the resources it needs on demand, and bills us only for the time those functions actually run.
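To make that concrete, here is a minimal sketch of what a deployed function looks like, assuming an AWS-Lambda-style handler signature. The event shape and handler name are illustrative, not any provider’s exact contract:

```python
import json


def handler(event, context):
    """Entry point the platform invokes once per event; there is no
    server process for us to manage between invocations.

    `event` carries the trigger payload (here, an illustrative dict);
    `context` carries runtime metadata (unused in this sketch).
    """
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }


# Locally, we can exercise the handler directly with a fake event:
response = handler({"name": "serverless"}, context=None)
```

Everything outside the handler — provisioning, scaling, routing the event in — is the platform’s problem, which is precisely the appeal.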

Benefits and Upsides

The potential upsides to going serverless are clear: the remaining VM and OS management issues are gone. Capacity planning also evaporates, as the cloud platform spins up resources to meet demand. Financially, we are no longer paying for unused resources.


Limits and Constraints

As with any platform provider, going serverless imposes certain limits and constraints on our usage. These are usually defined in terms of the maximum code size, execution time, and memory footprint of our functions.

  • Long execution times — We must aim to break lengthy processing up into functions that each execute within the imposed time limits. Even where we can, we still need to confirm that the economics (see later) of generating and collating the intermediate results stack up in our favour.
    If we simply can’t break a lengthy computation into smaller pieces with shorter execution times, or if the economics aren’t favourable, we may need to run it as a traditional service in a container, as part of a hybrid solution running concurrently alongside our serverless efforts.
  • Large memory footprints — Pre-caching a dataset or working with real-time data can, for some applications, dramatically reduce execution time… but it consumes a great deal of memory.
    Examples from my own experience include the use of a large pre-computed model (such as in AI/ML), and real-time decisioning or serving adverts based on complex in-memory processing.
    If smaller-footprint alternatives can’t be found that retain the performance characteristics needed, the large-footprint component may need to be hosted as a service in an always-on container or instance upon which our functions depend. Again, a hybrid approach.
  • Infrequent activities — Resources that are infrequently used are spun down in a serverless world, and incur the cost of restarting them when used (see performance below). It is possible to keep them “warm” with regular dummy triggers, but this needs to be taken into account in our architecture.
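To illustrate the first point above, here is a sketch of splitting a lengthy computation into chunks, each small enough to finish within a per-invocation time limit, with intermediate results stored and collated at the end. The function names are illustrative, and the in-memory `store` list stands in for external storage (an object store or queue) that real invocations would share:

```python
def process_chunk(chunk):
    """One short-lived invocation: work sized to fit the time limit.
    (The computation itself — a sum of squares — is just a placeholder.)"""
    return sum(x * x for x in chunk)


def collate(intermediate_results):
    """A final invocation combines the stored intermediate results."""
    return sum(intermediate_results)


def run_pipeline(data, chunk_size):
    """Orchestrate: split the input, fan out one invocation per chunk,
    persist each partial result, then collate.

    `store` is a plain list here, standing in for the external storage
    that would carry intermediate results between real invocations.
    """
    store = []
    for i in range(0, len(data), chunk_size):
        store.append(process_chunk(data[i:i + chunk_size]))
    return collate(store)


total = run_pipeline(list(range(10)), chunk_size=4)  # sum of squares 0..9
```

The extra storage writes and the orchestration between chunks are exactly the costs the economics discussion later asks us to weigh.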


Performance

Although going serverless lets us forget about capacity planning, we need to keep a close eye on the actual performance we achieve through the resources allocated to us on demand, particularly outliers caused by startup delays (“cold starts”) or unexpected changes in behaviour.
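One simple way to watch for those outliers is to track per-invocation latency percentiles rather than averages, since cold starts show up in the tail. A sketch, assuming we have already collected invocation durations in milliseconds (the sample data below is illustrative):

```python
def percentile(samples, pct):
    """Nearest-rank percentile of a list of latency samples."""
    ordered = sorted(samples)
    rank = max(0, min(len(ordered) - 1, round(pct / 100 * len(ordered)) - 1))
    return ordered[rank]


# Illustrative durations (ms): mostly warm invocations, a few cold starts.
durations = [12, 11, 13, 12, 14, 11, 950, 12, 13, 1020]

p50 = percentile(durations, 50)  # typical warm-invocation latency
p99 = percentile(durations, 99)  # dominated by cold starts
```

A p99 nearly two orders of magnitude above the median is the cold-start signature worth alerting on, even when the average looks healthy.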


Economics

At a high level the potential savings are obvious; however, the whole economic picture is more subtle, and the bottom line really does depend on our specific architecture and usage pattern. In particular, we need to account for new costs such as:

  • Repetition of resource startup (pre-warming/keep-alive is possible, see above).
  • Repeated pre-loading (in-memory) or accessing (if shifted elsewhere) of large datasets.
  • Additional communication due to breaking overall processing into smaller functions, e.g. use of a message bus to decouple functions, storage of intermediate results, etc.
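As a back-of-the-envelope illustration of why the bottom line depends on usage pattern, we can compare pay-per-use pricing against a fixed always-on instance. Every price below is a made-up placeholder, not any provider’s actual rate:

```python
# Hypothetical prices, for illustration only.
PRICE_PER_MILLION_INVOCATIONS = 0.20   # flat per-request charge
PRICE_PER_GB_SECOND = 0.0000167        # charge for memory * duration
ALWAYS_ON_INSTANCE_PER_MONTH = 35.00   # fixed cost of a small container/VM


def serverless_monthly_cost(invocations, duration_s, memory_gb):
    """Pay-per-use: cost scales with traffic, not with time provisioned."""
    request_cost = invocations / 1_000_000 * PRICE_PER_MILLION_INVOCATIONS
    compute_cost = invocations * duration_s * memory_gb * PRICE_PER_GB_SECOND
    return request_cost + compute_cost


# At low traffic, pay-per-use undercuts the fixed instance;
# at sustained high traffic, the always-on instance can win.
low = serverless_monthly_cost(invocations=100_000, duration_s=0.2, memory_gb=0.5)
high = serverless_monthly_cost(invocations=50_000_000, duration_s=0.2, memory_gb=0.5)
```

And note that the items listed above — keep-alive triggers, repeated dataset loading, intermediate storage and message-bus traffic — would all add further per-invocation costs to the serverless side of this comparison.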

The Serverless Dream?

So how realistic is going “entirely serverless”? For many workloads it may well be, but as we have seen, long execution times, large memory footprints, and unfavourable economics can each push parts of a system back into always-on containers. For most organisations, a pragmatic hybrid, rather than a purist all-serverless architecture, looks like the more likely destination.

Building software to solve hard problems (Software Engineer / Lead / Manager) — Opinions are my own.