Serverless computing explained

Serverless is a relatively new application design model that lets you build programs that behave as if the cloud were built for them. You don’t worry about infrastructure or management; you just write code.

The usual taxonomy of cloud computing has two main categories: infrastructure as a service (IaaS) and software as a service (SaaS). With IaaS, you get the virtual equivalent of a computer that you control much as you would a real computer over the network. With SaaS, you get an application, like Salesforce.com or Microsoft Office 365. There is also platform as a service (PaaS), which provides a cloud-based set of features, generally for developers to build cloud applications. PaaS, I would argue, is just a specialized form of SaaS.

But another model, called serverless computing, is different from and more virtual than all the others. A serverless application is just code running on a cloud, accessible through a URL. Supported by all the major cloud platforms, “serverless” has two meanings in the market today: functions as a service (FaaS) and back end as a service (BaaS).

FaaS is an event-triggered, stateless function. Importantly, the function runs in a container managed entirely by the cloud provider. BaaS refers to client-side applications, typically complex web pages, that make extensive use of third-party, cloud-hosted applications and services, such as authentication and databases, to manage state and data. I mostly discuss FaaS in this article.

So what is available?

Here are the serverless offerings from the major cloud providers:

  • AWS Lambda
  • Microsoft Azure Functions
  • Google Cloud Functions
  • IBM Cloud Functions

There are also open source serverless projects, such as Apache OpenWhisk (on which the IBM offering is based) and Oracle Fn.

Serverless architecture can be more cost-effective than renting whole servers, which inevitably have some idle time that you pay for. The price model for serverless computing is pay as you go, but it's metered much more precisely than whole servers and usually includes a large number of free transactions. On AWS Lambda, the oldest of these services and therefore the trendsetter, the first 1 million requests per month are free. Azure has a similar model with the same free monthly grant.

Plus, since the cloud provider is responsible for scaling capacity to demand, the developer or systems architect does not need to build autoscaling groups or provision servers based on demand. If your application needs to grow quickly from 10 requests to 1,000 requests per second, the provider does the scaling without any configuration or other work on your end. The provider also handles all resource provisioning and management, including CPU, memory, storage, and network capacity. You no longer need to be concerned with whether you have enough memory or CPU to handle the load.

Cloud development usually combines developers with architecture and operations staff, and serverless computing should diminish the need for the latter. You may get better developer productivity as a result, as developers will probably be freer to build and experiment with new serverless functions.

How are these services used?

You can write these functions in a variety of popular languages. You can write AWS Lambda functions in any Java VM language (Java, Scala, Clojure) or .NET language, as well as JavaScript, Go, or Python. The other providers also try to be as liberal as possible with language support. You deploy the code to the cloud provider, where it is triggered by an event, such as an HTTP request, a specified time, a message being added to a queue, or a file being posted to a folder.
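
To make that concrete, here is a minimal sketch of what such a function might look like, modeled on the Python handler convention AWS Lambda uses. The trigger here is the "file posted to a folder" case, so the event carries S3-style records; the details are illustrative rather than a definitive implementation.

```python
# Minimal event-triggered function, in the style of an AWS Lambda Python
# handler. The event shape mirrors what S3 sends when an object is
# created; the processing step is a placeholder.
import json


def lambda_handler(event, context):
    """Runs once per event; the provider manages the container it runs in."""
    results = []
    for record in event.get("Records", []):
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        # Real work would go here: resize an image, parse a CSV, etc.
        results.append(f"received s3://{bucket}/{key}")

    return {"statusCode": 200, "body": json.dumps(results)}
```

The same handler shape applies whether the trigger is an HTTP request, a timer, or a queue message; only the contents of the event change.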

How is a serverless architecture different in practice? Imagine a typical e-commerce site: On the client side, you have a web browser. On the server end, you have a web server, probably some application server and a database. The client is fairly dumb and deals directly with only the web server.

Reconfigured for serverless architecture, the client suddenly takes on a lot more responsibility. It makes calls directly to a serverless authentication service, queries a serverless product database, and makes calls through a serverless API gateway to search and make purchases. The client tracks all this state and displays it to the user. Note that this structure is much more amenable to the single-page application architecture you might build with AngularJS or React.
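
As a sketch of what sits behind that API gateway, here is the kind of product-search function the client might call over HTTP. The gateway wraps the request in an event and turns the returned dictionary back into an HTTP response; the hard-coded catalog and the function name are made up for illustration, and a real implementation would query a managed database or search service.

```python
# Hypothetical product-search function behind an HTTP API gateway, using
# the request fields an AWS API Gateway proxy event provides. The catalog
# is hard-coded purely for illustration.
import json

CATALOG = [
    {"sku": "1001", "name": "Espresso machine", "price": 199.00},
    {"sku": "1002", "name": "Coffee grinder", "price": 59.00},
]


def search_products(event, context):
    params = event.get("queryStringParameters") or {}
    query = params.get("q", "").lower()
    matches = [item for item in CATALOG if query in item["name"].lower()]

    # The gateway converts this dictionary into the HTTP response the
    # browser receives.
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps(matches),
    }
```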

But user-facing applications aren’t necessarily the typical case for serverless design. The serverless database front end in the example could fit into many different scenarios.

The architecture is much more flexible and changeable than conventional architectures. In theory, you can swap out any of the components with a compatible alternative without even touching the others.

Stateless vs. stateful

I have used the word "stateless" above, and it is important to serverless architecture. Like the containers on which they are based, serverless functions are designed to run and exit; you can allocate variables in one invocation of the function, but you have no guarantee that they will still be there on the next invocation. Therefore, if you need to maintain state, you must keep it elsewhere, such as in a database or file system.
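
As a small illustration of keeping state outside the function, the sketch below counts visits per user in a DynamoDB table rather than in a local variable, which may or may not survive to the next invocation. The table name and key schema are assumptions made for the example.

```python
# Stateless function that delegates its state to DynamoDB. The table
# "visit_counts", keyed on "user_id", is an assumption for this example.
import boto3

dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("visit_counts")


def lambda_handler(event, context):
    user_id = event.get("user_id", "anonymous")

    # Atomically increment the stored counter instead of relying on any
    # in-memory value from a previous invocation.
    response = table.update_item(
        Key={"user_id": user_id},
        UpdateExpression="ADD visits :one",
        ExpressionAttributeValues={":one": 1},
        ReturnValues="UPDATED_NEW",
    )
    return {"visits": int(response["Attributes"]["visits"])}
```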

Cloud providers also limit the amount of time that serverless functions can execute (the limit on AWS is 5 minutes). This is another reason some applications are not well-suited to serverless design. In the e-commerce example, the serverless components take state from the client and pass it on to back-end applications, including the database.

There are downsides to serverless systems, which is another way of saying they are not appropriate for all applications. PureSec's "Ten Most Critical Security Risks in Serverless Architectures" document makes a lot of good points. The design of serverless systems looks very simple from the outside, but in practice, it can be quite complex, creating a larger and more complex attack surface. Conventional security approaches (firewalls, IDS/IPS, etc.) do not apply to serverless. Testing of the implementation can also be more complex and is, at the very least, different from what you are used to.

There is an issue of vendor lock-in with serverless design. It may be that you are using a language like Java that runs everywhere, but the implementation will be different enough from vendor to vendor that there will be at least some work involved in porting it.

Other downsides include performance challenges for infrequently used functions, as these will probably be “spun down” by the cloud provider. If this happens and your entire (.NET, Java) runtime needs to load, you may well have a latency problem. Finally, because you do not control the systems on which serverless applications run, you are subject to all the problems attendant to multitenancy. If the management is not perfect, you may experience performance problems due to your neighbor’s application. Conceivably, data could leak between instances. You may not be able to audit the system to the level of your requirements.

Serverless and microservices sound similar in many ways, but they aren’t the same. Microservices are an expression of modularity in an application, but that application still contains many functions. The service probably runs on a conventional VM, which requires conventional provisioning. A serverless approach would probably implement each operation as a separate function, deployed separately and using resources only when it runs.
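
To make the distinction concrete, here is a rough sketch contrasting the two styles: the same pair of order operations, first grouped into one always-running microservice (using Flask purely as an example framework), then split into two independently deployed, stateless handlers. All names and routes are illustrative.

```python
# Illustrative contrast only, not a recommendation of either style.
import json

from flask import Flask, jsonify, request

# --- Microservice style: one provisioned process that stays running and
# exposes several related operations. In-process state is workable here
# because the process is long-lived.
app = Flask(__name__)
ORDERS = {}


@app.route("/orders/<order_id>", methods=["GET"])
def get_order(order_id):
    return jsonify(ORDERS.get(order_id, {}))


@app.route("/orders", methods=["POST"])
def create_order():
    order = request.get_json()
    ORDERS[order["id"]] = order
    return jsonify(order), 201


# --- Serverless style: each operation is its own function, deployed,
# scaled, and billed separately, holding no state of its own.
def get_order_handler(event, context):
    order_id = event["pathParameters"]["order_id"]
    # A real handler would fetch the order from a managed database here.
    return {"statusCode": 200, "body": json.dumps({"id": order_id})}


def create_order_handler(event, context):
    order = json.loads(event["body"])
    # A real handler would persist the order to a managed database here.
    return {"statusCode": 201, "body": json.dumps(order)}
```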

Pros and cons

The differences between serverless and microservice architectures provide clues to the benefits and problems with each approach. If your application is in constant use, the overhead of a provisioned server may be well worth the cost. Knowing which is better is not easy, as it requires a good idea of the actual transactional demand for each function.

The radically different nature of serverless design means that it is likely a candidate only for new applications. A retrofit of an existing infrastructure-based design to a serverless one would almost certainly be the wrong way to go about things. If you are considering a serverless design, think through the full application and the likely future needs for it. It’s entirely possible that serverless will be a bad choice.

But when serverless works, FaaS in particular, the application seems to be truly cloud-native software: easy to build, supremely flexible, and as efficient as possible. A real thing of beauty.

Serverless computing: Lessons for leaders

  • A serverless solution may not be appropriate for all applications.
  • Operational costs must be considered as well as developmental ones.
  • Serverless can significantly enhance development and deployment flexibility.

This article/content was written by the individual writer identified and does not necessarily reflect the view of Hewlett Packard Enterprise Company.