Breaking down #Serverless

Florian | Nov 8, 2019

As serverless technologies gain more and more momentum, you may be wondering what serverless actually means - because you know you can’t run a compute workload without computers or servers. And how does it relate to different technologies like Docker, Kubernetes and Function-as-a-Service (FaaS)?

So let’s start with why it is called serverless when, in the end, there are still servers required to run your stuff.

Basically, it is very simple: it’s a matter of perspective!

In a classical environment you would provision a server, install your software/application on it and run it. You also have to ensure that all the dependencies your software has are installed on the system. If you need more applications, require horizontal scalability and/or high availability, you would repeat that process until you have enough resources to run whatever you want to run.

If there is a new release of your software or you require additional resources, a developer or operator needs to access the server it should run on and install it. You can reduce this interaction with configuration management tools, continuous delivery pipelines, centralised logging etc. But in the end a developer will need at least some knowledge of the servers the application runs on and, in case there is something to debug, will have to connect to the system that is causing the trouble.

Here we need to take a short interlude to talk about a fundamental concept that drives serverless - the container.

A container enables you to ship your application including the complete runtime environment it requires. This removes the need to install application-specific dependencies on the server, as they are already packaged with the application itself. And this is the real benefit of a container: you can run it anywhere.
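To make this concrete, here is a minimal sketch of what such packaging could look like with Docker (the base image, file names and application are made up for illustration):

```dockerfile
# Hypothetical example: ship a small Python app together with its runtime
FROM python:3.7-slim

WORKDIR /app

# Install the app's dependencies into the image itself,
# so the host system no longer needs to provide them
COPY requirements.txt .
RUN pip install -r requirements.txt

# Add the application code
COPY app.py .

CMD ["python", "app.py"]
```

The resulting image contains everything the application needs, so it behaves the same on a developer laptop, an on-premises server or a cloud instance.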

I don’t mean Docker containers exclusively here; it can also be a good old LXC container (on which Docker was originally based) or any other container technology out there.

Using containers moves the management of dependencies and of the required runtime environment from operations to the developers. I’m sure your operations team will have some say in it as well, so essentially this is the point where you need a DevOps culture, or even better a DevSecOps culture. You can use containers without making big changes to a classical environment; you have just shifted a few responsibilities.

Going one step further by using serverless technologies, the entire interaction of a developer with an application shifts away from servers to your orchestration tool (I will use Kubernetes as an example from now on, but this also applies to any other container orchestrator). There is no direct deployment of your software on a server and no manual configuration of other networking components, e.g. a load balancer. You simply hand Kubernetes a new configuration and it will run the new version of your software on any available server (node), or on multiple servers, depending on how you want it to scale.
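As a rough sketch of such a configuration (all names and the image are placeholders), a simple Kubernetes Deployment tells the cluster what to run and how many replicas to keep alive:

```yaml
# Hypothetical Deployment: Kubernetes picks suitable nodes by itself
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 3               # scale by changing this number
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: my-app
          image: registry.example.com/my-app:1.0.0
          ports:
            - containerPort: 8080
```

Rolling out a new release is then just applying this file again with an updated image tag (e.g. kubectl apply -f deployment.yaml) - no server is touched directly.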

But wait a minute… there are those darn servers again!

Managing those servers depends heavily on whether you run your cluster on premises or in the cloud as a managed service. This is an important distinction when it comes to cluster management.

Running it on premises requires that you operate and maintain the Kubernetes cluster yourself. Your operations team no longer has to manage individual applications directly; it simply has to maintain your Kubernetes cluster (or multiple clusters), including all the servers/nodes the cluster(s) contain.

Using Kubernetes as a managed service in the cloud, like Amazon Elastic Kubernetes Service (EKS), Google Kubernetes Engine (GKE) or Azure Kubernetes Service (AKS), will simplify the management drastically due to automated update processes. Essentially, all that remains is the security management of the environment itself.

In both cases you will have access to the servers of your cluster(s), because even when using a cloud service they show up like any other virtual machine you launch. But once set up correctly, you can just add the number of nodes you need to your cluster; the entire provisioning is done for you and you don’t actually have to do anything with them - they just work. In an on-premises setup, your normal management process still applies, as it does for any other server.

So if you use a managed Kubernetes service, you will still have servers that make up your cluster, and you can define the limits within which they automatically scale with changing demand, but you essentially don’t have to manage them.
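As an illustration, with a tool like eksctl such scaling limits for the worker nodes could be declared roughly like this (a sketch; cluster name, region and sizes are made up):

```yaml
# Hypothetical eksctl config: the node group may grow and shrink
# between minSize and maxSize, but you never log into the nodes
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: demo-cluster
  region: eu-central-1
nodeGroups:
  - name: workers
    instanceType: m5.large
    minSize: 2
    maxSize: 10
```

Combined with the Kubernetes cluster autoscaler, the actual node count then follows demand within those limits.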

Ok, now we are getting somewhere with our expectations of serverless: the developer doesn’t see the servers anymore and you don’t really need to manage them either.

But can we go any further and completely remove the server from our scope?

Yes, we can! (I hope I won’t get sued by the former Obama campaign for using their slogan 😅) But since computing without computers/servers is not possible, the only way to achieve this is to use services where the underlying compute resources are managed by someone else.

AWS currently offers the Fargate service, which allows you to run Docker containers without provisioning the compute instances (EC2) they run on. This means you can run your Docker containers without ever seeing or managing any server. There is also the possibility that AWS will soon offer an option for EKS clusters to be backed by Fargate (https://github.com/aws/containers-roadmap/issues/32). This would also remove servers (EC2 instances) from your EKS cluster. (1)
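To illustrate where this is heading (see the update in (1) below), a sketch of such a setup with eksctl might declare a Fargate profile so that pods in a given namespace run on Fargate instead of on EC2 worker nodes - again, all names are placeholders:

```yaml
# Hypothetical: pods in the "serverless" namespace are scheduled
# onto Fargate, with no EC2 instances to manage
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: demo-cluster
  region: eu-central-1
fargateProfiles:
  - name: fp-default
    selectors:
      - namespace: serverless
```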

Another way to get rid of the servers providing your compute power entirely is FaaS. AWS, GCP and Azure offer this kind of service (AWS Lambda, Google Cloud Functions, Azure Functions), where you supply your application and it runs when triggered. How a function can be triggered depends on the cloud provider you choose, because they all offer different integrations, but triggers based on the platform’s native message queue service and on HTTPS work with all of them.

FaaS also runs your application in a container - this may not be directly obvious when using these services, because you don’t supply a container image, but they execute your application within one anyway to isolate it. GCP additionally provides the Cloud Run service, which supports Docker containers but only invocation via HTTPS.
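To give you a feeling for how little infrastructure you deal with here, this is roughly what such a function looks like on AWS Lambda (a minimal Python sketch; the exact shape of the event depends on the trigger you configure):

```python
import json

# Hypothetical Lambda handler: you only write this function,
# the platform runs it in a managed container when triggered
def handler(event, context):
    # "event" carries the trigger payload, e.g. an HTTP request
    # body or a message from a queue
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }
```

There is no server, no container image and no cluster in sight - you hand the platform your code and the trigger configuration, and everything else is managed for you.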

In conclusion: you can only go truly serverless by using managed services from cloud providers that abstract servers away to the point where you no longer have to deal with them.

(1) Update: Running Pods on Fargate was announced during AWS re:Invent 2019: https://aws.amazon.com/about-aws/whats-new/2019/12/run-serverless-kubernetes-pods-using-amazon-eks-and-aws-fargate/