The fact that the top three cloud vendors, Microsoft, Amazon, and Google, now have dedicated branding pages for the topic "Serverless" tells us something: it's not a fad, and it's here to stay.
New buzzwords often cause a lot of confusion, so here is the bottom-line definition: "Serverless computing allows you to build and run applications and services without thinking about servers."
Serverless computing is the abstraction of servers, infrastructure, and operating systems. When you build serverless apps you don’t need to provision and manage any servers, so you can take your mind off infrastructure concerns.
The term “serverless” doesn’t mean servers are no longer involved. It simply means that developers no longer have to think that much about them. Serverless lets developers shift their focus from the server level to the task level.
5 Key Characteristics
In order to qualify as a Serverless technology, the following 5 characteristics need to be addressed.
1. No Server management: There is no need to provision or maintain any servers. There is no software or runtime to install, maintain, or administer.
2. Flexible Event-driven scaling: This is one of the most important characteristics of Serverless: you shouldn't have to worry about scaling your solution when demand rises (see the Facebook example below). Typically your solution scales based on events, timers, or incoming actions. Simple examples: execute code every second, execute code when an HTTP endpoint is called, or execute code when a new file is uploaded to blob storage.
3. Highly available: Serverless applications have built-in availability and fault tolerance. You don’t need to architect for these capabilities since the services running the application provide them by default.
4. No idle capacity: You don’t have to pay for idle capacity. If your code is not running, you shouldn’t pay for it.
5. Micro Billing: When your code is executed you pay per execution. Vendors typically calculate this based on memory consumption and execution time. For example, if your code requires 200 MB of RAM and takes 3 seconds to complete, you pay only for that resource usage.
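As a rough illustration of micro billing, the 200 MB / 3 second execution above can be costed in GB-seconds. The per-GB-second and per-request prices below are illustrative assumptions for the sketch, not any vendor's actual rates:

```python
# Illustrative micro-billing estimate: cost is proportional to
# memory (GB) x duration (seconds), plus a small per-request fee.
PRICE_PER_GB_SECOND = 0.00001667  # assumed rate; check your vendor's pricing page
PRICE_PER_REQUEST = 0.0000002     # assumed rate

def execution_cost(memory_mb: float, duration_s: float, requests: int = 1) -> float:
    """Estimate the cost of running a function `requests` times."""
    gb_seconds = (memory_mb / 1024) * duration_s * requests
    return gb_seconds * PRICE_PER_GB_SECOND + requests * PRICE_PER_REQUEST

# One million executions at 200 MB for 3 seconds each:
print(f"${execution_cost(200, 3, 1_000_000):.2f}")
```

The point of the arithmetic: the bill tracks actual executions, so zero traffic means a near-zero bill.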
Key Technologies by Cloud Vendors
When it comes to Serverless, there is a set of core technologies and a set of supporting technologies. The core technologies fall under the pure Serverless model and satisfy the 5 key characteristics highlighted above. However, the core technologies alone cannot support all scenarios; they typically depend on supporting technologies such as storage, message queuing, databases, and API gateways.
Core Technologies for Serverless
A core Serverless technology needs to provide one or more of these 3 capabilities:
- A scalable platform to execute a piece of code
- A scalable workflow solution for stitching together discrete code executions.
- A scalable pub/sub event routing engine
Here are the core Serverless technologies from each vendor
Azure Functions
Azure Logic Apps
Azure Event Grid
AWS Lambda
AWS Step Functions
Amazon SNS
Google Cloud Functions
Google Cloud Pub/Sub
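To make the first capability, "a scalable platform to execute a piece of code", concrete, here is a minimal sketch of a function handler in the AWS Lambda style (Python). The event shape follows Lambda's HTTP proxy-integration convention; the greeting logic itself is just a placeholder:

```python
import json

def handler(event, context):
    """Entry point the platform invokes once per event.

    `event` carries the trigger payload (here, an HTTP request in the
    Lambda proxy-integration shape); `context` carries runtime metadata.
    There is no server, framework, or listener code anywhere.
    """
    name = (event.get("queryStringParameters") or {}).get("name", "world")
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }

# Locally, the handler is just a function; the cloud platform wires the
# HTTP endpoint, scaling, and billing around it.
resp = handler({"queryStringParameters": {"name": "Serverless"}}, None)
print(resp["body"])
```

Azure Functions and Google Cloud Functions use different signatures, but the model is the same: you supply the function body, the platform supplies everything else.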
Supporting Technologies for Serverless
The supporting technologies for Serverless fall under the PaaS (platform as a service) category and do not fully satisfy all 5 key characteristics. The main factor is that they carry base platform charges. Example: if you are using Azure API Management, there is a base cost (in the region of $100/month, depending on tier) just to provide that service.
Here are supporting technologies from each vendor: storage (Azure Blob Storage, Amazon S3), message queuing (Azure Service Bus, Amazon SQS), databases (Azure Cosmos DB, Amazon DynamoDB), and API gateways (Azure API Management, Amazon API Gateway).
We highlighted only the top three cloud vendors here since they are leading the way with both core and supporting technologies. IBM Bluemix (OpenWhisk) is currently lagging behind in this space; in addition, various other vendors such as Iron.io compete in this area.
In today's world, there are various use cases for Serverless technologies. Let's take a simple example: imagine you are the CTO of Facebook. One of Facebook's key capabilities is allowing users to upload their photos and videos and share them with their friends. Facebook itself is a massive platform, and running and scaling it for 2 billion+ users requires a huge infrastructure (including many servers). However, this particular piece of functionality can be implemented seamlessly using the Serverless model, ignoring the complexity of the rest of the Facebook application.
The flow will look like this:
You can implement the functionality as an AWS Lambda, Azure Function, or Google Cloud Function, which automatically exposes an HTTP endpoint; from the front-end web or mobile application you simply upload the content via that endpoint, and the data gets stored in the relevant backend. This is a simplified example; ideally you would add an API gateway and security to the HTTP endpoints, which can also be achieved seamlessly using Azure API Management or Amazon API Gateway, with security provided by Azure AD B2C or a third party like Auth0.
The important point to note here is that this particular functionality needs to scale for 2 billion+ users uploading millions of photos and videos every minute. If Serverless technologies did not exist, this capability would take months to implement with a huge upfront cost, whereas with Serverless it is a few weeks of effort with near-zero upfront cost.
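As a sketch of the upload flow above (not Facebook's actual implementation), the function body might decode the request and hand the bytes to a storage service. The `store_object` callable is a hypothetical stand-in for a real blob or S3 put call; injecting it keeps the sketch self-contained:

```python
import base64
import json
import uuid

def make_upload_handler(store_object):
    """Build an HTTP-triggered upload handler.

    `store_object(key, data)` stands in for a real storage client call
    (e.g. an S3 or blob put); passing it in makes the sketch testable
    without any cloud credentials.
    """
    def handler(event, context):
        try:
            data = base64.b64decode(event["body"])
        except (KeyError, ValueError):
            return {"statusCode": 400, "body": json.dumps({"error": "bad request"})}
        key = f"uploads/{uuid.uuid4()}"  # unique object name per upload
        store_object(key, data)
        return {"statusCode": 201, "body": json.dumps({"key": key})}
    return handler

# Usage with an in-memory dict standing in for blob storage:
bucket = {}
upload = make_upload_handler(lambda key, data: bucket.update({key: data}))
resp = upload({"body": base64.b64encode(b"photo bytes").decode()}, None)
print(resp["statusCode"], len(bucket))
```

Scaling this is the platform's problem: a million concurrent uploads simply mean a million concurrent handler invocations, with no capacity planning on your side.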
What is the difference between PaaS and Serverless?
This is one of the common questions we come across when discussing Serverless. In simple terms, you can see Serverless as the evolution of PaaS. As briefly mentioned before, for a technology to be classified as pure Serverless it needs to satisfy the 5 key characteristics; PaaS offerings fail mainly on "no idle capacity" and "micro billing". Azure Cosmos DB and Amazon DynamoDB are good examples: you need to provision those services and accept some base cost. You can use capabilities like auto-scaling to grow the platform as the need arises, but that is a manual or automated task left to the consumer; the platform will not look after itself.
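The "no idle capacity" difference can be made concrete with back-of-the-envelope arithmetic. The hourly rate and per-execution price below are illustrative assumptions only, not real PaaS or Serverless prices:

```python
HOURS_PER_MONTH = 730  # average hours in a month

def paas_monthly_cost(hourly_base_rate: float) -> float:
    """A provisioned PaaS instance bills for every hour, busy or idle."""
    return hourly_base_rate * HOURS_PER_MONTH

def serverless_monthly_cost(executions: int, cost_per_execution: float) -> float:
    """Pure Serverless bills only for actual executions."""
    return executions * cost_per_execution

# With zero traffic, the provisioned instance still costs money every month,
# while the Serverless bill is exactly zero.
print(paas_monthly_cost(0.14))           # assumed $0.14/hour base instance
print(serverless_monthly_cost(0, 1e-5))  # prints 0.0
```

This is why auto-scaling a provisioned service narrows the gap but never closes it: the base capacity is billed whether or not any code runs.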
What are the challenges?
Manageability is one of the key challenges in going down the Serverless route. If you are building a single monolithic application, it is easy to manage and maintain: you have mature DevOps practices to run the application and mature CI/CD practices to take code from development to production. If, instead, you have hundreds of small, discrete pieces of Serverless functionality spread all over the place, managing and operating that solution becomes extremely complex. I have explained this in the article Challenges of Managing a Distributed Cloud Application.
In the past 12-18 months, the cloud vendors have invested significantly in building and maturing the platform; however, when it comes to developer tools and management tools, maturity still lags.
It will be just a matter of time before either the cloud vendors or 3rd party vendors come up with some intelligent management and monitoring solution to address this gap.
Vendor lock-in is the other challenge: you need to be very careful which vendor you choose. Even though all the top vendors offer like-for-like functionality, it is a one-way street; once you implement and go live with your solution, it is extremely hard to port it from one vendor to another.