Series Introduction: A History Class on Deployments
Once the application development is done, it's time for others to use it. After all, that's why you spent your precious time developing it. Know what that means? It's time to deploy! And how in the world are you supposed to do that?
While developing, your local system will have some hardware specifications, a specific OS, configuration files, software packages and an environment to run the application.
When you deploy your application, it means that it should work on another system without any hiccups. But this other machine may not have the same setup as your machine. What do you do? How do you ensure that all consumers can use your application seamlessly?
That's what we'll be talking about in this series. Let's look at how deployments have been done over the past few decades...
The Timeline of Deployments
Consider this. You want to go camping in the woods for a few days, and you've got to eat, of course! You don't have burger joints in the woods! What do you do?
Okay, let's get a bit creative over here (even if it seems illogical). Just for argument's sake! You have three options:
- Option 1: You build a new, well-equipped kitchen in the woods... (I said "for argument's sake", didn't I?)
- Option 2: You gather all the things necessary for cooking a meal. You carry a stove, utensils, woks and spoons, along with all the raw ingredients required. (Bear with me here.)
- Option 3: You prepare your meals at home and pack them in a lunchbox.
If you ask me, I'd take the third option. It's the most convenient and requires the least effort of the three.
This analogy is just a way to explain each stage of our timeline.
Option 1: Servers
Looking at our little narrative, servers are the Option 1 of deployments.
Back in the 1990s (when the internet was still in its infancy), applications were deployed on physical servers.
And when I say physical servers, imagine an actual room with machines running your application.
Deploying an application was an arduous task: each server had to be painstakingly configured to the exact specifications the application required, which involved setting up the operating system, storage and network components, security mechanisms and much more. This took days, even weeks! Not to mention the hefty amount one had to cough up for a single server.
And if the server got overloaded or failed, the entire process had to be repeated on a new server. (Nooooo!)
This is like building a new kitchen in the woods just to cook a few meals. It's doable in theory, but it's going to take too much time, effort and money. Not very practical, eh? And that's when the world of technology moved to virtual machines. Our Option 2...
Option 2: Virtual Machines (VMs)
Virtual machines made deployments a bit less laborious. A VM could be ported to any machine irrespective of the underlying operating system.
Kinda like a small computer running inside another computer. The VM acted as a "guest" with its own "guest OS" and used the hardware resources of a "host" machine, which itself ran a separate "host OS".
In some ways, VMs abstracted the hardware since the person deploying the application wasn't really configuring actual hardware.
Although VMs were a huge relief compared to physical servers, they brought in their own set of problems.
VMs were notorious for being "heavy", as in they were quite resource-hungry. They ate away a large chunk of the computing resources of the host machine, and configuring them was still a complex task.
In a way, VMs are like our Option 2, where we take along all the equipment required to cook meals. It's a viable option, but still not that great. Come on! Who would want to lug all of that into the woods (and how)? And a lazy person like me wouldn't even want to set it all up! And that's why I'd go for the third option.
Option 3: Containers
Now, I'd rather pack my food in a lunchbox and take it to the woods. It's convenient, low-effort and much less time-consuming. And that's how containers are!
Containerisation technology came into the limelight when orchestration went mainstream. (That's Kubernetes we're talking about.)
Think of it like wrapping your application in a small "box" and deploying it anywhere you like. The main motto behind containerisation is:
Write once, deploy anywhere
The application is isolated in a container that holds only the packages, libraries and other dependencies needed to run just that application. This way, the container is extremely lightweight, and the OS and hardware are completely abstracted away.
Think about it: deployment becomes so much easier. You create one container, just once, and it can be used on any machine, in any environment, irrespective of the hardware configuration. Isn't that great?
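To give you a small taste of what that "box" looks like in practice, here's a minimal, hypothetical Dockerfile sketch for a simple Python web service. The file names (app.py, requirements.txt), the base image and the port are placeholder assumptions purely for illustration; we'll build the real thing for the petstore application later in the series.

```dockerfile
# A minimal, hypothetical example -- not the petstore container we'll build later.
# Start from a small base image that already has Python installed.
FROM python:3.12-slim

# Copy the application code and its dependency list into the image.
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY app.py .

# Document the port the app listens on and define how the container starts.
EXPOSE 5000
CMD ["python", "app.py"]
```

Build the image once with `docker build -t myapp .` and run it anywhere Docker is available with `docker run -p 5000:5000 myapp`; the same image behaves the same way on your laptop, your colleague's machine or a cloud server.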
Containers and the Cloud
If you're a millennial or Gen Z, VMs and physical servers are passé for you, because you now have the cloud (and the internet, of course!). Thanks to the cloud, computing power is now available to the common man. You can spin up servers with one click and deploy applications in minutes.
The cloud and containers are like two sides of the same coin. They go hand in hand. Containers are the very basis of the cloud: the vast majority of application deployments on the cloud today use containerisation. In our upcoming blogs, we'll be talking a lot about cloud deployments using containers.
In Summary
This series is all about containers. In the upcoming blogs, we'll get into the nitty-gritty of containers and all the concepts associated with them. And if you're worried that there's going to be no code in this series, don't be! We'll be creating our very own container for the petstore application we wrote in our microservices series. Without further ado... head over to the first part of the series to dive into the world of containers!