How to choose a cloud serverless platform

Running a server farm in the cloud at full capacity 24/7 can be awfully expensive. What if you could turn off most of that capacity when it isn't needed? Taking the idea to its logical conclusion, what if you could bring up your servers on demand, provisioning only enough capacity to handle the current load?

Enter serverless computing. Serverless computing is an execution model for the cloud in which a cloud provider dynamically allocates—and then charges the user for—only the compute resources and storage needed to execute a particular piece of code. 

In other words, serverless computing is on-demand, pay-as-you-go, back-end computing. When a request comes in to a serverless endpoint, the back end either reuses an existing “hot” endpoint that already contains the correct code, or allocates and customizes a resource from a pool, or instantiates and customizes a new endpoint. The infrastructure will typically run as many instances as needed to handle the incoming requests, and release any idle instances after a cooling-off period.
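The reuse-or-provision behavior described above can be sketched as a toy model. This is not any provider's actual implementation, just an illustration under assumed names (`ServerlessPool`, `handle_request`, a fixed idle timeout) of how warm instances get reused and idle ones are released after a cooling-off period:

```python
class ServerlessPool:
    """Toy model of the lifecycle described above: reuse hot
    instances, provision new ones on demand, and release any
    instance idle longer than the cooling-off period."""

    def __init__(self, idle_timeout=2.0):
        self.idle_timeout = idle_timeout  # cooling-off period, in seconds
        self.hot = []  # list of (instance, last_used_time) pairs

    def handle_request(self, now):
        # Release instances that have sat idle past the timeout.
        self.hot = [(i, t) for i, t in self.hot if now - t < self.idle_timeout]
        if self.hot:
            instance, _ = self.hot.pop(0)  # reuse a warm instance
            start = "warm"
        else:
            instance = object()            # cold start: provision anew
            start = "cold"
        self.hot.append((instance, now))
        return start

pool = ServerlessPool(idle_timeout=2.0)
print(pool.handle_request(now=0.0))  # cold (nothing warm yet)
print(pool.handle_request(now=1.0))  # warm (reused within the timeout)
print(pool.handle_request(now=5.0))  # cold (the warm instance expired)
```

The key trade-off this models is the one the article returns to below: reuse makes requests fast, but a request that misses the warm pool pays the provisioning cost.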

“Serverless” is of course a misnomer. The model does use servers, although the user doesn’t have to manage them. The container or other resource that runs the serverless code is typically running in a cloud, but may also run at an edge point of presence.

Function as a service (FaaS) describes many serverless architectures. In FaaS, the user writes the code for a function, and the infrastructure takes care of providing the runtime environment, loading and running the code, and controlling the runtime lifecycle. A FaaS module can integrate with webhooks, HTTP requests, streams, storage buckets, databases, and other building blocks to create a serverless application.
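As a concrete sketch of the FaaS model, here is a minimal function in the style of an AWS Lambda Python handler, where the platform supplies the runtime, invokes the entry point per request, and manages the lifecycle. The event payload and greeting logic are invented for illustration:

```python
import json

def handler(event, context):
    """Entry point invoked by the platform for each request.
    `event` carries the request payload; `context` carries
    runtime metadata (request ID, remaining time, and so on)."""
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }

# Locally, the function can be exercised with a fake event:
print(handler({"name": "serverless"}, None))
```

Note that the function itself contains no server, routing, or scaling logic; wiring it to an HTTP endpoint, a storage-bucket trigger, or a stream is configuration handled by the platform.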

Serverless computing pitfalls

As attractive as serverless computing can be (some adopters report cutting their cloud spending by over 60 percent after switching), there are several potential issues you may have to address. The most common is cold starts.
