The Container Coup
Once there was a developer who wrote software.
And the world said "Share it with us if you really care"
"I will. Just tell me how!", she said
Oh, we'll tell you about 'em. Deployments? Don't dread!
First there were the Servers. We used them back in the day.
But now they're totally passé.
You see they were a bit too much work
Cuz of all the setup for OS, storage and network.
Next came the VMs and they were a huge relief.
But even their reign was quite brief.
They called a server their host
and ate away resources the most
Is there no solution? No need to fret.
We have an answer and you'll like it we bet!
Now we got the Container.
It's a total no-brainer
Pack your code and ship anywhere
No worries about OS or hardware
That's the mantra (and you're going to find it everywhere)
Containers. Well, what can we say? They're here to stay!
If you've been with us for some time, you'd know we swear by container technology for application deployments. By the way, if you want to jog your memory on containers, we have an entire series dedicated to them. You might want to look at it first and then circle back here.
So what are we going to explore in this new series?
You see, we aren't done with containers. Yet. There's one factor. A very important one...
...and you'll soon find out more about it.
Scaling the "Scale" Factor
Containers! They're a perfect fit!
They're so easy to deploy we must admit!
That's very well. But what about scale?
SCALE?
You know. A hundred containers. Or maybe a thousand.
Can we handle them? Or are we bound to fail?
Our systems can never crash. Our containers must always run.
Also there's networking, load balancing & configurations...
Who's going to manage everything under the sun?
No need to worry. That's been thought about too.
We got a technology to do that all for you!
True that! Managing one container, or maybe a few, is no hassle. But what happens when things scale up drastically? You now have a hundred or maybe a thousand containers to manage. On top of that, you'll also need to set up and configure a lot of other things like load balancers, networking, auto-scaling, etc. Surely you can't do all that manually.
That's where Container Orchestration steps in. In the simplest sense, container orchestration is a way to automate the deployment and management of containers at scale.
Let's understand the concept with an example
Container Orchestration
Imagine you are moving to a new house, and all your belongings have to be packed in cardboard boxes and loaded into your car's trunk. Your friend is helping with the move too and has offered his car's trunk (in case your car's trunk runs out of space). So that makes two cars to carry all your boxes.
As per your "To-Do" list, here are a few things that you might have to do in this case:
- Procure boxes to pack your belongings in.
- Pack your belongings in a box depending on its capacity.
- Get extra boxes if all your belongings don't fit in. Or remove the extra boxes if the reverse situation arises.
- Load the boxes in the car's trunk depending upon the space available in the trunk.
- You might also want to double-check that the boxes aren't damaged during the packing and loading process.
Now place this scenario in the context of container orchestration.
You are the "Container Orchestrator".
The Cardboard Boxes are the "Containers".
Your Car (and your friend's car too) forms the "Host system" where your containers run.
As a "Container Orchestrator" you'll have to carry out the following tasks. (Compare it with your to-do list):
Provision containers to run applications
Procure boxes to pack your belongings in.
This is an obvious one. To run an application, you have to pull the appropriate container image and start a certain number of containers to run the application on the host system.
Resource Allocation
Pack your belongings in a box depending on its capacity.
The next thing to keep in mind is that different application containers are going to require different amounts of resources. In this case, you'll have to set resource quotas for each container depending upon what it demands.
This is like packing your heavier belongings in a large box which is obviously going to occupy a larger space in your car trunk.
Scale up or down containers
Get extra boxes if all your belongings don't fit in. Or remove the extra boxes if the reverse situation arises.
In case an application container sees a surge in its load, you'll probably have to spin up another copy (or copies) of the container so that the load is evenly balanced across them.
Or, if the load drops to the bare minimum, you can do away with the extra containers and save on resources.
Movement of Containers
Load the boxes in the car's trunk depending upon the space available in the trunk.
In our scenario, you have two car trunks to load your boxes into. If your car's trunk is overloaded and cannot accommodate more boxes while your friend's trunk has ample space, you can always shift some boxes to your friend's car.
Likewise, in most cases, you will have multiple host systems running containers. (The "cluster" concept which we will talk about in the next post).
And as a container orchestrator, you will have to decide which containers will run on which host at a given time. For example, if one host system is overloaded, you can move some containers onto the other host system to balance out the load on each host.
Monitoring Health of the Containers
You might also want to double-check that the boxes aren't damaged during the packing and loading process.
While running containers, you cannot overlook any failure or error conditions that might occur. That's why you always have to keep a lookout for the containers' health. If a container fails and errors out, necessary action must be taken (like starting a new container or terminating the failed one).
Networking
Another major responsibility of the container orchestrator is to set up and manage networking-based resources.
This is especially true for microservices-based applications, where the services often communicate with each other (inter-service communication).
Or, in the case of web services-based applications, the container orchestrator has to expose the services to the outside world and handle load balancing.
Orchestration with Kubernetes
Thankfully, there are special "agents" that handle all this container "management" hassle for us. We call them "Container Orchestration Platforms".
There are a handful of them available in the market today, like Docker Swarm and Apache Mesos.
But the current industry favorite (and ours too) has to be Kubernetes.
If you aren't familiar with Greek, Kubernetes means "helmsman", or literally the person who steers a ship. (P.S. We don't know Greek either; we looked up the meaning online.)
Just look at the logo, and you'd understand. It's a helm! (the steering wheel we see on ships in pirate movies)
Just like a helmsman (kubernetes in Greek) steers and controls a ship, Kubernetes (as in the technology platform) is responsible for managing the containers on host systems. This is what the official website says:
Kubernetes, also known as K8s, is an open-source system for automating deployment, scaling, and management of containerized applications.
Yes! You read that right. Kubernetes is open source! And guess what? Kubernetes is written in Go! (That validates the fact that awesome things are written in Go!)
Kubernetes was open-sourced by Google in 2014, with version 1.0 launching in 2015, and was heavily influenced by Google's internal "Borg" cluster manager. That means Kubernetes is not even a decade old.
But its popularity has exploded in this short timespan. It has now become the gold standard for container-based deployments. All of the major cloud providers support Kubernetes along with an entire suite of features. Now that's quite a feat!
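To give you a tiny preview (don't worry about the syntax yet; we'll unpack all of it in later posts), here's a minimal sketch of a Kubernetes "Deployment" manifest. The application name and image below are made up purely for illustration. Notice how our moving-day to-do list shows up in it: which image to run (provisioning), how many copies (scaling), resource quotas (allocation), and a health check (monitoring).

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-app            # hypothetical application name
spec:
  replicas: 3                # scaling: run three copies of the container
  selector:
    matchLabels:
      app: hello-app
  template:
    metadata:
      labels:
        app: hello-app
    spec:
      containers:
      - name: hello-app
        image: nginx:1.25    # provisioning: the container image to run
        resources:           # resource allocation: quotas per container
          requests:
            cpu: "250m"
            memory: "64Mi"
          limits:
            cpu: "500m"
            memory: "128Mi"
        livenessProbe:       # health monitoring: restart if this check fails
          httpGet:
            path: /
            port: 80
```

And scaling up later is just one command, e.g. `kubectl scale deployment hello-app --replicas=10`. You describe the state you want; Kubernetes does the orchestrating.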
A fun fact here. Why is Kubernetes called 'K8s'?
Spell Kubernetes for me
K-U-B-E-R-N-E-T-E-S
There are 8 letters (U-B-E-R-N-E-T-E) between the starting letter 'K' and the ending letter 's'. Compress this and you get K8s!
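If you like, the abbreviation rule can even be written as a tiny Python function (a throwaway sketch, not anything Kubernetes ships): keep the first and last letters and replace everything in between with a count.

```python
def numeronym(word: str) -> str:
    """Abbreviate a word by counting the letters between its first and last."""
    return f"{word[0]}{len(word) - 2}{word[-1]}"

print(numeronym("Kubernetes"))            # K8s
print(numeronym("internationalization"))  # i18n
```

The same trick gives us other famous numeronyms like i18n (internationalization) and l10n (localization).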
What's In Store for you?
The Kubernetes universe is quite big, and we cannot fit all the concepts in a single post. That's why we have decided to dedicate multiple blog series just to Kubernetes.
In this series, we focus solely on the core concepts of the Kubernetes system. That's right! Get your basics in first. Only then can we head over to more advanced topics and deploy our own applications using Kubernetes with the utmost ease!
Ready to delve into Kubernetes? Stay tuned for our next post. Coming soon!