Understand Serverless's limitations (and unlock its benefits)

Lucas Merdinian
5 min read

The concept of Serverless applies to a broad range of cloud products that allow developers to no longer manage their infrastructure, whether they are executing services (using Functions or Containers) or storing data (in databases or Object Storage).

Serverless is an extension of Platform-as-a-Service (PaaS) that focuses on letting developers simply deploy their code. In most cases, however, PaaS still binds your application to a server, which limits scalability and generates fixed costs. Heroku’s Dynos, for example, do let you simply deploy and run your code, but they cannot “scale to zero” and do not autoscale by default.

High scalability, pay-as-you-go, flexibility, no server management… Serverless sounds like the ideal solution for launching any application built on microservices and Event-Driven Architecture (EDA). However, when it comes to actually building such applications, things are not so rosy. As D. Zimine puts it in his very relevant article on Serverless, “there is no free lunch in a closed system: to gain a benefit something must be sacrificed”. Developers and the Serverless community have had to build new tools, processes and practices to get around providers’ limitations, whether that means handling latency (cold starts) or easily deploying several functions at the same time.

The concept of Serverless is awesome, and it is relevant for many use cases, yet it is not a silver bullet.

Let’s get real and analyze the main selling points of Serverless.

No server management

As the name suggests, Serverless means developers don’t need to manage servers: everything is handled by the cloud provider, from system updates and access control to scaling and server redundancy. There are, of course, actual servers behind serverless platforms. They are securely shared between service users, and cloud providers handle infrastructure and security management (especially to prevent users from disturbing, or worse, attacking other users). At Scaleway, we use Kapsule, our managed Kubernetes service, along with the open source solution Knative and our own solutions to handle server management. Other technologies can be used, however: AWS, for example, uses Firecracker, a micro-VM hypervisor open-sourced in 2018.
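
To make the developer-facing side of this concrete, here is a minimal sketch of a function handler in Python. The handle(event, context) signature follows common FaaS conventions (Scaleway’s and AWS’s Python runtimes both use a variant of it), and the event shape is illustrative; everything beneath the handler, from provisioning to patching, is the provider’s job.

# Minimal FaaS handler sketch: this is all the code the developer ships.
# The platform (Knative pods, Firecracker micro-VMs, etc.) provisions,
# isolates, scales and patches the servers that actually run it.
def handle(event, context):
    # 'event' carries the incoming request; 'context' carries runtime metadata.
    name = (event.get("queryStringParameters") or {}).get("name", "world")
    return {
        "statusCode": 200,
        "body": f"Hello, {name}!",
    }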

The main issue with no server management is that users lose control over where and how their code is executed and, most importantly, how and where their data is handled. In the O’Reilly serverless survey 2019, security concerns appear as the most common obstacle to the adoption of Serverless. Scaleway’s Serverless products run in our data centers in France, the Netherlands and Poland; your applications are deployed in isolated environments, and our system ensures they are always available.

Scalability

Recent years have seen the emergence of efficient open source tools that enable infrastructure to be scaled automatically based on traffic or resource consumption. Examples include container orchestrators like Kubernetes and micro-VMs like Firecracker. Building on these solutions, Serverless compute services can run code or containerized applications that adapt to traffic surges and, most of all, thanks to mutualization and system optimization, can scale from zero in less than half a second.
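
One rough way to see “scale from zero” in action is to time the first request to an idle function against the warm requests that follow. Below is a minimal sketch using Python’s requests library; the endpoint URL is a placeholder for your own deployed function.

import time
import requests

# Rough cold-start check: time the first request to an idle function,
# then a few warm ones. Replace the URL with your own function endpoint.
URL = "https://example-function.functions.example.cloud/"

for i in range(4):
    start = time.perf_counter()
    requests.get(URL, timeout=30)
    elapsed = time.perf_counter() - start
    label = "cold (first call)" if i == 0 else "warm"
    print(f"request {i + 1}: {elapsed:.3f}s ({label})")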

Scalability is neither unlimited nor immediate. Server mutualization and containerization help to hide this by keeping nodes running that can rapidly launch containers to meet demand. However, there are still servers behind Serverless, and they are bound by the physical time it takes to stop or launch them. Moreover, pooling resources implies delivering the same experience to all users who share the same nodes, which in turn implies setting limits and quotas that prevent users’ applications from scaling indefinitely.
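
Those quotas surface to clients as throttled requests, so callers are usually written to retry with exponential backoff. A minimal sketch, assuming the platform signals throttling with HTTP 429 (the status code most providers use for rate limiting):

import time
import requests

def call_with_backoff(url, max_retries=5):
    """Retry a throttled (HTTP 429) call with exponential backoff."""
    delay = 0.5
    for attempt in range(max_retries):
        response = requests.get(url, timeout=30)
        if response.status_code != 429:
            return response
        time.sleep(delay)
        delay *= 2  # double the wait after each throttled attempt
    raise RuntimeError(f"still throttled after {max_retries} attempts")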

Flexibility

If we restrict Serverless to Functions-as-a-Service, then by breaking each service down into Functions, Serverless pushes the concept of microservices to a point where each part of a system can be managed and scaled independently. For example, if a website's billing or login systems receive many requests, it won’t impact the notification or user information systems, which are little used. Serverless can therefore be seen as a derivative of microservice-based applications in which each microservice is stripped down to a set of functions.

Source: D. Taibi, J. Spillner and K. Wawruch, “Serverless Computing: Where Are We Now, and Where Are We Heading?”, IEEE Software
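
In code, that breakdown is very literal: each endpoint of the former monolith becomes its own deployable handler. A hypothetical Python sketch (the handler names and event shape are illustrative, not any provider’s required layout):

# billing.py -- deployed as its own function, scales with billing traffic only
def handle_billing(event, context):
    invoice_id = event.get("invoice_id")
    return {"statusCode": 200, "body": f"invoice {invoice_id} processed"}

# login.py -- deployed separately; a surge here never starves billing
def handle_login(event, context):
    user = event.get("username")
    return {"statusCode": 200, "body": f"session opened for {user}"}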

However, this implies an increasingly complex architecture, because each individual function or service is reduced to a single basic operation, with the logic supported by middleware or third-party services (message queues, workflow engines, etc.). This can make Serverless applications a can of worms. Nonetheless, some argue that these drawbacks are cultural, and that Serverless calls for a change in how applications are designed: from creating to consuming services, and from code to configuration.

Source: X. Lefèbvre, What a typical 100% Serverless Architecture looks like in AWS!
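
As one concrete example of that middleware glue, a function typically performs its single operation and then drops a message on a queue for the next function to pick up. A sketch using AWS SQS through boto3 (the queue URL and region are placeholders; any message queue plays the same role):

import json
import boto3

sqs = boto3.client("sqs", region_name="eu-west-1")  # placeholder region
QUEUE_URL = "https://sqs.eu-west-1.amazonaws.com/123456789012/orders"  # placeholder

def handle_checkout(event, context):
    # The function does one basic operation (record the order), then hands
    # the rest of the workflow to whatever listens on the queue.
    order = {"order_id": event.get("order_id"), "status": "paid"}
    sqs.send_message(QueueUrl=QUEUE_URL, MessageBody=json.dumps(order))
    return {"statusCode": 202, "body": "order queued for fulfilment"}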

Easy management

Like many managed services, Serverless is plug and play: “drop your code or container, and we take care of the rest”. For Containers, Scaleway provides a predefined isolated environment to run the container that is much easier to manage than a Kubernetes cluster or Stardust Instances. For Functions, we have a specific runtime with some system libraries. To enhance the developer experience, we built an API and a GUI console, and provided tooling to make deployment easier. More can be done, of course, for example making it possible to deploy from a Git repository, or to combine services using a single configuration file (like some serverless.com services or AWS’s SAM).

Much like in every ecosystem, everything depends on whether you want to stay on the path the cloud provider created, or whether your use case has specific requirements. For example, functions are bound to a runtime, which at Scaleway we chose to keep as lightweight as possible to enable faster start times; as a result, some libraries required to run analytical jobs, such as Python’s SciPy or Pandas, are not included.
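
A common workaround when the runtime lacks a library is to vendor the dependency next to your code and put it on the import path before importing it. A sketch under that assumption, with a package/ directory built locally (e.g. via pip install -t package/ pandas) and uploaded alongside the handler:

import os
import sys

# The lightweight runtime does not ship heavy analytical libraries, so we
# vendor them in a "package/" directory uploaded with the handler
# (built locally with: pip install -t package/ pandas).
sys.path.insert(0, os.path.join(os.path.dirname(__file__), "package"))

import pandas as pd  # resolved from the vendored directory

def handle(event, context):
    frame = pd.DataFrame(event.get("rows", []))
    return {"statusCode": 200, "body": frame.to_json()}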

Cost efficiency

Thanks to economies of scale and mutualization, cloud providers offer the possibility to scale to zero, meaning that only their control plane is running while clients’ applications are on “standby”, ready to handle incoming requests. Moreover, by optimizing their applications for Serverless, developers can build an application in which every service scales independently. This is the main reason why companies first switched to Serverless.
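
The arithmetic behind that switch is simple: a pay-per-invocation service costs nothing while idle, whereas a dedicated instance bills around the clock. A back-of-the-envelope Python sketch with deliberately hypothetical prices (not any provider’s actual rate card):

# Back-of-the-envelope cost comparison with HYPOTHETICAL prices --
# not any provider's actual rate card.
PRICE_PER_GB_SECOND = 0.0000125   # hypothetical serverless compute price
MEMORY_GB = 0.125                  # a 128 MB function
AVG_DURATION_S = 0.2               # 200 ms per invocation
INSTANCE_PER_MONTH = 15.0          # hypothetical always-on small instance

def serverless_monthly_cost(invocations_per_month: int) -> float:
    return invocations_per_month * AVG_DURATION_S * MEMORY_GB * PRICE_PER_GB_SECOND

for n in (10_000, 1_000_000, 100_000_000):
    cost = serverless_monthly_cost(n)
    print(f"{n:>11,} calls/month: serverless ~ ${cost:,.2f} "
          f"vs instance ${INSTANCE_PER_MONTH:.2f}")

At low traffic the serverless column is close to zero; past a certain volume, the always-on instance becomes cheaper, which is exactly the trade-off described below.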

However, cost efficiency depends on your use case. Serverless is recommended for asynchronous tasks, or for tasks that can easily be parallelized and have unpredictable loads. This makes Serverless very attractive for newly launched services, especially for smaller teams that don’t have the means to manage their own infrastructure and can absorb Serverless as an extra cost, but less relevant for more seasoned companies that can afford to invest in their own system (using Kubernetes, for instance). Be careful: as Serverless products are billed on a “pay-as-you-go” model, costs can become difficult to predict depending on your usage. It is therefore important to monitor your costs or limit the ability of your functions to scale.

Conclusion

Beyond being a trend, Serverless is a real game changer for how applications are run, erasing the need to stress over infrastructure management. But it comes at a price: trusting a third party to do the work behind the scenes. That is our job, and we will keep expanding and improving our services to fit your needs, so you can focus on creating innovative services, leveraging the benefits of Serverless, and using application designs adapted to its constraints.
