Serverless computing is a very popular term these days; it is difficult to attend a conference without hearing about it. Popularity, however, has not eliminated confusion about what the term really means, what its benefits and drawbacks are, and how it can be used in different scenarios.
A definition of serverless that we will draw upon comes from Martin Fowler's site:
“Serverless can also mean applications where server-side logic is still written by the application developer, but, unlike traditional architectures, it’s run in stateless compute containers that are event-triggered, ephemeral (may only last for one invocation), and fully managed by a third party.”
You might be asking yourself: doesn’t this sound a lot like Platform as a Service (PaaS)? Serverless does share some characteristics with PaaS, but there are some important differences.
- Managed Service – In both architectures, there are dependencies on a third party for the underlying infrastructure and ongoing maintenance of the service. This isn’t necessarily a bad thing, as providing infrastructure to your business or client may not be part of the value chain.
- Self-service – Tooling needs to exist, either via a portal or an API/SDK, that provides the ability to automate tasks like deployments and configuration. If you need to talk to a salesperson or log a support ticket to perform these tasks, then the service is not serverless.
- Scaling – There needs to be an aspect of scaling that is inherent in the vendor’s offering. This is where the two architectures begin to diverge. In a serverless architecture, scale should be achieved purely based upon consumption. PaaS may provide some level of auto-scaling, and hopefully auto de-scaling, but the level of granularity around these events tends to be very different.
- Event-driven – A serverless architecture is event-driven and runs within a stateless compute container. With PaaS architectures, the underlying infrastructure and application services sit waiting to be called; in order to avoid disruptions, these services must remain active even when idle.
- Metered billing – In a serverless architecture, if there is no demand for a service, then there should be no associated billing. With PaaS, an underlying service is active and waiting to be utilized. While PaaS may be metered, usually per day or per month, it isn’t metered at the granularity that serverless is, which is usually at a transaction or micro-unit level.
Let’s take a look at a few scenarios where we can deploy serverless computing to address business needs.
Businesses are becoming more competitive. Whether you’re a 100-year-old or a 100-day-old business, there are people looking to “eat your lunch”, and time to market has become more important than ever. Businesses are no longer interested in sinking large amounts of capital into initiatives that may take months or years to develop and even longer to pay back.
Serverless technology has benefits in greenfield projects where a customer is adopting a new SaaS application. Perhaps a business has decided to buy a new sales automation or field service tool. These SaaS applications live on their own island, so you’re going to run into challenges when you plug your corporate master data into them. Subsequently, you’ll need to rationalize the data that the SaaS application generates with your other systems of record.
Greenfield applications usually carry some level of uncertainty: How will the system be used? What volume of transactions/work orders/opportunities/leads will the system generate? What will our costs be after implementation?
Azure Logic Apps is a great serverless service for these requirements. With Logic Apps, there is no infrastructure to provision, billing is consumption-based, and more than 200 connectors are available to provide this integration. Remember, the business bought the SaaS application because they perceived value in it. The project isn’t about integration, but because of the impact integration has, it becomes an important by-product. By using Logic Apps, organizations can extract even more value from their investment through its ability to integrate with other systems.
As an IT organization or consultant, the priority is to return as much value to the business as quickly and as reliably as possible. Without worrying about infrastructure, organizations can focus on implementing the solution that provides value, which allows for a faster time to market.
Inevitably there are times when a new requirement shows up that may not be easily addressed within an existing architecture. Trying to address the new requirement with an existing tool may lead to technical debt. The change may be perceived to be a “short-term fix” or may be required for a longer duration.
For example, suppose an existing solution depends upon Cosmos DB as a data store, and a new requirement has been introduced that demands a validation process run whenever a new document is inserted into Cosmos DB. Introducing an Azure Function is a great option here, in part because a trigger can be set up to invoke the function when this event occurs. What is nice about this approach is that the Azure Function is layered on top of the architecture rather than buried deep within it, where it could create agility challenges if future changes are required.
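As a minimal sketch of what such a validation step might look like, the plain Python function below checks incoming documents. In an Azure Function, this logic would sit inside a handler wired to the Cosmos DB change-feed trigger (configured separately); the field names and rules here are hypothetical assumptions, not part of any real schema.

```python
# Hypothetical validation logic for documents arriving via a Cosmos DB
# change-feed trigger. In an Azure Function this would be the body of the
# triggered handler; here it is a plain, testable function.
REQUIRED_FIELDS = {"id", "customerId", "amount"}  # assumed document schema


def validate_document(doc: dict) -> list:
    """Return a list of validation errors (an empty list means valid)."""
    errors = [f"missing field: {f}" for f in sorted(REQUIRED_FIELDS - doc.keys())]
    if "amount" in doc and doc["amount"] < 0:
        errors.append("amount must be non-negative")
    return errors


print(validate_document({"id": "1", "amount": -5}))
# ['missing field: customerId', 'amount must be non-negative']
```

Because the validation is a plain function, it can be unit-tested locally before being deployed behind the trigger.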
Once again, the ability to quickly provision an Azure Function, let Microsoft worry about the scale, and pay only for the executions that take place is extremely compelling. This is highlighted on the Microsoft licensing page: “Azure Functions consumption plan is billed based on per-second resource consumption and executions. Consumption plan pricing includes a monthly free grant of 1 million requests and 400,000 GB-s of resource consumption per month.”
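To make the billing model concrete, the calculation below applies the free grant from the quote above to a hypothetical workload. The per-unit rates are illustrative assumptions, not current list prices; check the Azure pricing page for actual figures.

```python
# Illustrative Azure Functions consumption-plan cost estimate.
# Rates are assumptions for illustration only.
PRICE_PER_MILLION_EXECUTIONS = 0.20  # assumed USD rate
PRICE_PER_GB_SECOND = 0.000016       # assumed USD rate
FREE_EXECUTIONS = 1_000_000          # monthly free grant (from the quote)
FREE_GB_SECONDS = 400_000            # monthly free grant (from the quote)


def monthly_cost(executions, avg_memory_gb, avg_duration_s):
    """Estimate monthly cost after subtracting the free grant."""
    gb_seconds = executions * avg_memory_gb * avg_duration_s
    billable_execs = max(0, executions - FREE_EXECUTIONS)
    billable_gbs = max(0.0, gb_seconds - FREE_GB_SECONDS)
    return (billable_execs / 1_000_000) * PRICE_PER_MILLION_EXECUTIONS \
        + billable_gbs * PRICE_PER_GB_SECOND


# 3M executions/month, 512 MB of memory, 200 ms average duration:
print(round(monthly_cost(3_000_000, 0.5, 0.2), 2))  # 0.4
```

In this sketch, the 300,000 GB-s consumed falls entirely within the free grant, so only the 2 million executions beyond the free 1 million are billed.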
A legacy brownfield application may have provided a lot of value for a long time, but now needs modernization. Expiring platform support, whether for an operating system, a database or from a vendor, may be a driver for these replacements. Application replacements may also be the result of new and emerging business models, such as a stronger focus on customer service, leveraging new cognitive services, or moving from a product company to a service company.
For example, an elevator company’s existing business model includes selling elevators and subsequently charging yearly maintenance fees. However, the company determined there were business opportunities in changing this model: instead of customers owning an elevator, they now subscribe to an elevator service.
Previously, maintenance was a very manual activity. The company would routinely dispatch technicians based upon manufacturing specifications, and the maintenance application was a legacy web application that depended upon a technician updating records manually. However, with new technologies like Azure Event Grid, elevator events can be raised, evaluated, aggregated and analyzed in the cloud, which creates efficiency opportunities. This allows pre-emptive maintenance and reduces the need for unnecessary service visits. When a visit is required, a rich dataset exists before the technician arrives on-site, so the technician’s time is used efficiently and customer disruption is reduced.
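As a sketch of the kind of aggregation an event subscriber might perform on this telemetry, the function below counts fault events per elevator and flags any unit that crosses a threshold. The event shape, field names and threshold are illustrative assumptions; in practice the events would arrive via an Event Grid subscription rather than a Python list.

```python
# Hypothetical aggregation over elevator telemetry events, of the sort an
# Event Grid subscriber might run. Event schema and threshold are assumed.
from collections import defaultdict

DOOR_FAULT_THRESHOLD = 3  # assumed: faults per window before flagging


def elevators_needing_service(events):
    """Flag elevators whose 'door_fault' events meet or exceed the threshold."""
    faults = defaultdict(int)
    for event in events:
        if event["type"] == "door_fault":
            faults[event["elevatorId"]] += 1
    return {eid for eid, count in faults.items() if count >= DOOR_FAULT_THRESHOLD}


events = [{"elevatorId": "A", "type": "door_fault"}] * 3 + \
         [{"elevatorId": "B", "type": "door_open"}]
print(elevators_needing_service(events))  # {'A'}
```

A real implementation would also weigh event severity and recency, but even this simple roll-up shows how raw events become a dispatch decision.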
A key benefit of this approach is that the elevator company did not have to build a massive data center to support the explosion of events they were hoping to track. Instead, they lean on Microsoft: as more customers adopt the new service, Microsoft provides the additional scale required. This keeps costs low as the company launches its service and validates the market opportunity.
Serverless Computing Advantages
With some background information and use cases outlined, let’s review both the advantages and disadvantages of serverless computing.
Consumption-based billing is one of the most compelling aspects of serverless computing. It allows organizations to start small and, as more consumption is required, the service scales for you. No longer are organizations required to “build for peak” consumption. This returns capital to the business for other purposes, like growth.
Some organizations find consumption-based billing a concern because it makes budgeting difficult. While the concern is reasonable, it is important to look at this from a unit-economics perspective: if what you are providing delivers value at a small scale, then by doing more of it you should continue to see the same, if not more, value per unit. If that is not the case, you probably want to take a closer look at the business proposition of your solution.
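The unit-economics argument can be sketched with some quick arithmetic. All figures below are hypothetical; the point is simply that when cost and value both scale per transaction, growth in volume grows net value rather than eroding it.

```python
# Illustrative unit-economics check for a consumption-billed service.
# All figures are hypothetical assumptions.
cost_per_transaction = 0.002   # assumed serverless cost per transaction
value_per_transaction = 0.05   # assumed business value per transaction

for volume in (10_000, 1_000_000):
    net_value = volume * (value_per_transaction - cost_per_transaction)
    print(f"{volume} transactions -> net value ${net_value:,.2f}")
```

If the per-transaction margin were negative at small scale, higher volume would only magnify the loss, which is exactly the signal to revisit the business proposition.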
Not having to build for peak load, or worry about a key calendar date (like Black Friday shopping) or weather events, creates opportunities for organizations that take advantage of serverless. In these cases, scaling is delegated to the cloud provider and becomes part of the service they provide. This isn’t to say that you no longer have to monitor the performance of your applications, but there are significant advantages in not having to handle scaling yourself.
Staying current with operating system and database security patches is important; it can prevent data leaks and breaches. But this activity is unproductive for many organizations and doesn’t contribute to a company’s value chain: taking systems offline doesn’t allow you to make more widgets. This is another activity a cloud vendor takes care of, which frees up staff to focus on more value-added activities.
Innovation happens in the cloud first, and this is not about to change. As marketplaces become more and more competitive, the velocity at which cloud providers ship products and services is incredible. Organizations that take advantage of new pricing, deployment and service capabilities will create competitive advantages over peers that don’t.
Serverless Computing Disadvantages
All of these new capabilities come at a cost. Here are some of the challenges that exist when adopting serverless computing.
With the amount of innovation being released each week, it is difficult to keep up with all of the opportunities. Often, organizations are making decisions on technology that hasn’t been “battle-tested”. You also have to deal with organizational change management: how do you get your staff up to date with these technologies, and how do you sustain their interest given the amount of change being introduced?
When you run a data center, you traditionally have a lot of control over those assets. You can monitor the SAN, the network, the converged computing appliance and performance counters. With this infrastructure being abstracted by the cloud provider, you naturally don’t have as much visibility into, or control over, the provisioning of infrastructure. This isn’t necessarily a bad thing, but it definitely creates some angst for organizations that are used to having this control.
Vendor lock-in may also be a concern. However, people tend to forget there is always some level of lock-in within solutions. Is it the programming language, an operating system, a vendor, a database or even a person that is critical to the success of a solution?
What is different with serverless is that the entire business model has changed. Historically, vendor lock-in was a very big concern because switching costs were extremely high; this is why it is so difficult to migrate from a legacy ERP system. However, when you are paying based upon consumption and the time to market is considerably shorter than it was in the past, a cloud vendor needs to earn your business every day, not just at every contract renewal cycle. While vendor lock-in still exists and should be considered, it is important to evaluate it with a new set of parameters.
Serverless components tend to be deployed and priced separately. The result can be a collection of disparate microservices that are very difficult to monitor. A business leader doesn’t care if component ‘xyz’ is down; they care if there is a business impact. With the speed at which serverless components are being delivered, a holistic tool that monitors all of these individual services as a single unit has been a gap, but this is a problem that Serverless360 is trying to solve.
Moving to cloud-based services likely shifts the way organizations fund these investments. Traditionally, large amounts of capital would be used to build solutions which could be depreciated and create tax benefits.
In a serverless model, costs shift toward lower capital expenditure and higher operating expenditure. The net result is that cash should be freed up for the business, but it does change budgeting. For newer companies with less access to capital this is clearly an advantage, but for some larger, established companies it will likely create some friction in the finance department.
Within this article, we provided some background on what Serverless computing is and how it relates to PaaS offerings. We also provided some use cases that describe how you can use some of the serverless technologies from Microsoft. In addition, we discussed some of the advantages and disadvantages that serverless computing provides.
While the disadvantages of serverless computing should not be discounted, the advantages of serverless computing clearly overshadow the challenges. Ultimately, the technology exists to introduce efficiencies and opportunities for consumers and organizations. Taking advantage of serverless to get to market faster, in a pay-as-you-go consumption manner creates opportunities that organizations should not ignore.
Serverless360 is a single platform to operate, manage and monitor Azure serverless components. It provides efficient tooling that is not, and is not likely to be, available in the Azure Portal. Try Serverless360 free for 30 days!