Over time, computing has gone from mainframes to bare-metal servers to on-premises virtualization to cloud server instances and containerization to serverless computing. What’s next, codeless computing? Probably not, but luckily serverless computing is nothing so bizarre. The server element for executing code is essentially abstracted away from developers, and the model is new enough that we’re still in the Wild West.
Serverless Computing Explained
Serverless computing is a fancy way of saying that you don’t have to worry about servers when you want to execute code, an approach often referred to as Function-as-a-Service (FaaS). Major cloud providers keep compute capacity ready for anyone to reserve for running virtual machines (VMs) and containerized microservices.
For public cloud providers, why not take it one step further and isolate running code on demand as a way to make more money? This is great for developers who need to continuously add services and features to their application stack but don’t want to fuss with managing the infrastructure.
Major cloud providers offer these serverless computing options with an emphasis on the payment model:
- Amazon Web Services (AWS)—AWS Lambda
Run code without thinking about servers. Pay only for the compute time you consume.
- Microsoft Azure—Functions
Accelerate your development with an event-driven, serverless compute experience. Scale on demand and pay only for the resources you consume.
- Google Cloud Platform (GCP)—Google Cloud Functions
Event-driven serverless compute platform.
- IBM Cloud—IBM Cloud Functions (aka, OpenWhisk)
…executes functions in response to incoming events and costs nothing when not in use.
As great as these services are, though, we still have to contend with The Good, The Bad, and The Ugly.
The good is the on-demand nature of this computing strategy at low cost. Suppose an application developer wants to give their aging application architecture a quick lift with a small feature that checks an Internet of Things (IoT) sensor in a smart home, like air quality to automatically suggest or order a new air filter. Instead of adding the compute power of infrastructure needed for many thousands of subscribers to the application, they can develop this on-demand function that only needs to run occasionally.
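To make the scenario concrete, here is a minimal sketch of what that occasional check might look like as an AWS Lambda-style handler. The event shape, the `air_quality` field, and the threshold are all illustrative assumptions, not a real sensor API:

```python
# Hypothetical Lambda-style handler: it runs only when the IoT platform
# invokes it with a sensor event, so no always-on infrastructure is needed
# for the many thousands of subscribers who trigger it only occasionally.

AIR_QUALITY_THRESHOLD = 150  # assumed particulate reading that signals a clogged filter


def lambda_handler(event, context):
    # The invoking platform passes the sensor reading in the event payload.
    reading = event.get("air_quality", 0)
    if reading > AIR_QUALITY_THRESHOLD:
        # A real application might call an ordering API here;
        # this sketch just returns the suggestion to the caller.
        return {"action": "suggest_filter", "reading": reading}
    return {"action": "none", "reading": reading}
```

The developer pays only for the milliseconds this function actually runs, which is the whole appeal of the model for small, occasional features.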
The bad is that these functions can get complicated and hard to manage, especially if they must run for more than five minutes at a time in an application process. They must also be accessed through a private API gateway and require common library dependencies to be packaged into them, which can be terribly inefficient compared to containerization. The more complicated the required code, the less likely a serverless function is to suit the application architecture well. For more information, see What is Serverless Architecture? What are its Pros and Cons?
The ugly is that there is currently no standardization of serverless computing across the different public providers. Vendor lock-in becomes a real risk once these enticing, low-priced functions as code become addictive to developers and enterprises. Functions cannot be ported around as easily as containers can.
As Rick Kilcoyne, VP Solutions Architecture at CloudBolt stated in a recent article:
“…tantalizing as serverless computing is, one must be fully aware that moving code between serverless platforms is extremely difficult and only made more so by cloud vendor specific libraries, paradigms, and IAM. Serverless computing is the technological equivalent of a snare trap as there’s virtually no way to easily migrate from one platform to another once committed.”
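The portability problem shows up even before vendor-specific libraries and IAM enter the picture: the basic handler contract differs between platforms. Below is an illustrative sketch of the same trivial logic written against two providers’ conventions (the event fields and return shapes are simplified assumptions):

```python
# The same logic, two handler contracts. Moving between platforms means
# rewriting the function's entry point, event parsing, and response shape.

def core_logic(name):
    # Provider-neutral business logic.
    return f"Hello, {name}"


# AWS Lambda convention: (event, context), where event is a dict you parse
# yourself, and an API Gateway response is a dict with statusCode and body.
def aws_handler(event, context):
    return {"statusCode": 200, "body": core_logic(event.get("name", "world"))}


# Google Cloud Functions HTTP convention: a single Flask-like request object;
# the return value becomes the HTTP response body.
def gcp_handler(request):
    name = request.args.get("name", "world")
    return core_logic(name)
```

Keeping business logic in provider-neutral functions like `core_logic`, with thin per-platform handlers around it, is one common way to soften the snare trap Kilcoyne describes, though event formats, triggers, and IAM still have to be rebuilt on each platform.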
Serverless computing should definitely be a part of any enterprise hybrid cloud strategy. Just as a hybrid cloud application has a mix of public and private clouds, it can also have a mix of infrastructure technologies such as virtualization, containerization, and serverless computing with functions. Our CloudBolt hybrid cloud management platform helps you manage it all from one place.