Serverless: the culmination of 60 years of cloud innovation

Simon Shillaker

Many put the beginning of cloud computing somewhere in the mid-nineties, when companies could first rent dedicated servers over the public internet. However, many of the foundations for cloud computing were laid during the 1960s, when research institutions and universities first experimented with remote access to shared machines, beginning the journey to the cloud as we know it.

From then until now, the same three principles have driven innovation in the cloud: ease, scale, and cost. Today, these same three principles are embodied in the latest cutting-edge serverless technologies, which make it easy to build cheap, highly scalable distributed systems.

Time sharing, the original cloud computing

Even in the early days of networked computing, researchers were starting to think about how to share resources between multiple users. The general approach was called “time sharing”: small terminals launched applications remotely on a central mainframe.

John Backus first proposed the concept of time sharing in 1954, although computing and networking at the time were not developed enough to support its implementation. It was only in the 1960s - a decade of upheaval in computer science - that the first time-sharing systems emerged. Two developments were critical to the emergence of these systems: large mainframe machines, and networking. Early mainframe computers were used primarily by research institutions, businesses, and government organizations for data processing and scientific research. They were large, expensive, and complex to operate. Early networking consisted of simple, point-to-point connections between machines; it was only in 1969 that we saw the arrival of ARPANET and wide-area networking.

Sharing a single machine between multiple users in the 1960s was a challenge. Each mainframe ran a single operating system process, which supported concurrent execution of users’ workloads via a form of cooperative multitasking: applications provided explicit points at which the operating system could switch to executing another application. By switching back and forth between applications, the system achieved concurrency, but not parallelism.
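
To make this concrete, below is a minimal sketch of cooperative multitasking in Python, a modern stand-in for the mainframe-era mechanism: each “application” is a generator that hands control back to a round-robin scheduler at explicit yield points.

```python
# A minimal sketch of cooperative multitasking: each "application" yields
# control back to the scheduler at explicit points, much as 1960s
# applications yielded control to the operating system.

def application(name, steps):
    for i in range(steps):
        print(f"{name}: step {i}")
        yield  # explicit point where control returns to the scheduler

def scheduler(apps):
    # Round-robin between applications until all have finished. Work is
    # interleaved (concurrency) but never simultaneous (no parallelism).
    while apps:
        app = apps.pop(0)
        try:
            next(app)
            apps.append(app)  # not finished: re-queue it
        except StopIteration:
            pass  # application finished

scheduler([application("A", 3), application("B", 2)])
```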

The 60s and 70s saw several key innovations in operating system design that were essential to sharing resources between users more effectively, many of which still exist today. The Compatible Time-Sharing System (CTSS) was the first general-purpose time-sharing operating system. CTSS supported both batch processing and concurrent processing, and introduced the concept of an OS scheduler running in a protected kernel process. Atlas, a supercomputer developed at the University of Manchester in the early 60s, ran the first operating system to introduce virtual memory and paging. TENEX was a time-sharing operating system that combined these two ideas, along with a hierarchical file system, to produce something resembling a modern operating system. TENEX was an important influence on the development of UNIX and its derivatives, which power much of the cloud today.

In the 1970s, time-sharing systems became more widespread as the cost of computer hardware decreased and the demand for computing resources increased. During this decade, many commercial time-sharing systems were developed, aided by further advances in operating system design. During the 1980s, the rise of personal computers and local area networks (LANs) led to a decline in the popularity of time-sharing systems, and began the transition to modern cloud computing.

The three principles: ease, scale and cost

Although mainframe computers and time sharing are a thing of the past, their development was driven by the same principles that drive modern cloud computing. Not only have these principles brought us to where we are now, but they continue to drive new cloud products.

Ease. In the 1960s, just as today, running your own hardware was difficult. Mainframes were huge, complex machines, requiring expertise and time to maintain. Although today’s servers are smaller, simpler and more robust, it’s still difficult to set up racks and switches, plug in cables, configure network topologies, and switch out failing drives. Today’s cloud saves users this hassle, just as time sharing did in the 60s.

Scale. Both time sharing and modern cloud computing are all about giving users access to a pool of resources larger than they would otherwise have access to. During the 60s this meant connecting a small client terminal to a single mainframe; today, it means connecting any networked device to gigantic data centers, with a functionally infinite pool of compute, storage and networking.

Cost. The “sharing” part of time sharing made it possible to access expensive mainframe machines without needing to buy one for every user. Exactly the same motivations drive the cloud today; by pooling resources, the cost of the expensive hardware is amortized across users. The result is that businesses today can run large distributed systems serving hundreds of thousands of customers, while paying a fraction of the cost of the underlying hardware and associated expertise.

From time sharing to the modern cloud

The transition from time sharing to serverless was enabled by several key changes in the world of computing: the rise of wide-area networking, increasingly cheap hardware, and the development of virtualization. With these tools, we were able to move from sharing single large machines within an organization to sharing large pools of machines over the internet.

The first cloud providers emerged in the 90s, offering users access to bare-metal machines. Each user had control of one or more physical machines for a given period of time. This was expensive, so users would pack multiple applications onto the same physical machine, which made capacity planning harder and meant that upgrading a single host risked breaking several applications.

Separating large machines into smaller chunks was made possible by the arrival of modern virtualization. Although the ideas behind virtualization extend back to the 1970s, hardware-supported virtualization and modern hypervisors arrived in the early 2000s, ushering in the era of virtual-machine-based cloud computing. Providers could now securely subdivide large physical machines into smaller virtual machines, and users could provision, scale and update a single-purpose VM for each of their applications. As users’ workloads became more granular, providers could achieve higher utilization of their infrastructure by packing users more tightly onto the underlying physical hosts.

Docker saw its first release in 2013, followed by Kubernetes in 2014, starting the era of containerized cloud computing. Although the ideas behind operating-system virtualization extend back to the late 90s with FreeBSD jails, it was only with Docker that this form of lightweight virtualization saw widespread adoption. Containers allowed users to break their applications into even smaller, shorter-lived chunks, leading to a more transient view of compute resources, captured by Bill Baker’s famous quip about treating infrastructure as “cattle, not pets”.

The principles of the cloud taken to new extremes

Serverless first came into existence with the release of AWS Lambda in 2014. Although this makes it almost a decade old, it is still in its infancy compared to other cloud computing models. Since then, serverless has become an overloaded and poorly-defined term. For some, it is still just AWS Lambda and Functions-as-a-Service (FaaS), but for many others, it has grown into a general philosophy for building easy-to-use, auto-scaling cloud products.

To explain what serverless is, we can return to the same three principles of ease, scale and cost. Serverless is the most extreme embodiment of these three principles in cloud computing today, aiming to create systems that are incredibly easy to use, infinitely scalable, and truly pay-as-you-go. These ideas are not specific to a single product; we can have serverless computation, storage and networking.

First, serverless aims to completely isolate users from configuring or provisioning infrastructure, taking the “ease” principle to a new extreme. Users do not need to specify an operating system, instance type, networking rules, or storage. Instead, they just write their code, upload it, and the provider handles the rest.
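
For example, an entire FaaS application can be a single function. Below is a minimal sketch of a Python handler in the style of AWS Lambda; the handler(event, context) signature is Lambda’s Python convention, while the function body and the “name” field are purely illustrative.

```python
# A complete serverless application: no operating system, instance type,
# or network configuration, just a function the provider invokes per request.

def handler(event, context):
    # 'event' carries the request payload; 'name' is an illustrative field.
    name = event.get("name", "world")
    return {"statusCode": 200, "body": f"Hello, {name}!"}
```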

Second, serverless takes scaling to a new level, scaling from zero to infinity by default. All serverless products have auto-scaling built in, scaling down to zero when you are not using them, and up to whatever scale you need, when you need it. This scaling is fast and fine-grained, going from zero to hundreds of cores and gigabytes of memory in seconds.

Finally, serverless billing offers a never-before-seen level of granularity and cost efficiency. Serverless pay-as-you-go billing is as precise as its auto-scaling: users pay only for what they use, measured in milliseconds of runtime, megabytes of memory, and bytes of data sent and received.
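
As a back-of-the-envelope illustration, per-request cost can be computed directly from duration and memory. The rate below is hypothetical, not any provider’s actual pricing.

```python
# Illustrative pay-as-you-go billing: charge per GB-second of memory-time.
PRICE_PER_GB_SECOND = 0.0000167  # hypothetical rate, in dollars

def request_cost(duration_ms, memory_mb):
    gb_seconds = (memory_mb / 1024) * (duration_ms / 1000)
    return gb_seconds * PRICE_PER_GB_SECOND

# A 120 ms request using 256 MB costs a few millionths of a dollar;
# a million such requests cost roughly fifty cents.
print(request_cost(120, 256) * 1_000_000)
```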

The most popular serverless products today are Functions-as-a-Service (FaaS) and Containers-as-a-Service (CaaS). With FaaS and CaaS, users divide their application into a number of small, single-purpose functions or containers, which the provider scales and executes in response to incoming requests. Many object storage products are also serverless, though not always labeled as such: most S3-compatible object stores available today require minimal configuration, offer pay-as-you-go billing, and automatically scale up from zero.
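
As a sketch of what this looks like in practice, the snippet below writes and reads an object using the boto3 S3 client; the bucket name is illustrative, and it assumes credentials are already configured in the environment.

```python
# Serverless storage: no volumes to provision or resize. Assumes AWS
# credentials in the environment; the bucket name is illustrative.
import boto3

s3 = boto3.client("s3")
s3.put_object(Bucket="example-bucket", Key="greeting.txt", Body=b"hello")

# The store scales transparently with usage, and billing follows the
# actual bytes stored and transferred.
obj = s3.get_object(Bucket="example-bucket", Key="greeting.txt")
print(obj["Body"].read())
```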

Conclusion

We’ve seen that, although serverless is the current cutting edge of cloud computing, it is built on the decades-old principles of ease, scale, and cost. It represents the culmination of innovation and development that has taken us from time sharing in the 1960s to the functionally infinite, highly scalable pools of resources behind today’s serverless systems. Although serverless is in its infancy, there are already several compelling use cases, and it will only continue to get easier, more scalable, and cheaper to use.
