Overview of Containers

Achintha Bandaranaike
8 min read · Oct 6, 2023


Introduction:

Containers are a lightweight, portable, and isolated way to package and deploy applications. They are becoming increasingly popular in the modern world, as they offer a number of advantages over traditional virtualization technologies such as virtual machines.

In this article, I will provide an overview of containers, including their history, benefits, and how they work. I will also discuss the difference between containers and virtual machines.

Traditional Architecture of the Server:

Imagine you have one physical server. At the bottom is the hardware abstraction layer (CPU, memory, network card, hard disk, etc.). On top of that sits an operating system, and every operating system comes with its own kernel: Windows uses a proprietary, closed-source kernel, while Linux distributions use the open-source Linux kernel. On top of that operating system, we install our applications.

Traditional servers are typically monolithic systems, meaning that all of the components are tightly coupled together. This makes them difficult to scale and manage, as changes to any one component can impact the entire system.

What is Virtualization?

Server virtualization is the process of creating multiple virtual servers from a single physical server. This is done by using software to create a layer of abstraction between the physical hardware and the operating system. This allows multiple operating systems to run on the same physical server, each with its own dedicated resources.

Virtualization is implemented with a hypervisor. There are many hypervisors, such as VMware ESXi, KVM, and Xen, and a hypervisor is a heavy piece of software. Its job is to take the physical hardware and turn it into virtual hardware: it exposes a virtual hardware abstraction layer with a virtual CPU, virtual memory, a virtual network card, and so on. On top of that virtual hardware we install an operating system, which means installing a kernel all over again for every virtual machine. On top of each guest OS we can then install applications: Application A, Application B, Application C, and so on. This is why hypervisors became so popular a decade ago.

What are Virtual Machines?

A virtual machine (VM) is a software computer that, like a physical computer, has a CPU, memory, storage, and network interface card. VMs are created and run on a physical computer called a host. A hypervisor, which is a software layer, manages the VMs and allocates resources from the host. VMs are often used to consolidate multiple operating systems and applications onto a single physical server. This can improve resource utilization and reduce costs. VMs can also be used to create isolated environments for different development projects or to test new software without affecting the production environment.

On a single piece of hardware, we can run many isolated machines, and each VM is completely isolated from the others; that is why they are called virtual machines. We can keep adding VMs until the physical resources are exhausted.

Here are some of the benefits of using VMs:

  • Improved resource utilization: VMs can be consolidated onto fewer physical servers, which can improve resource utilization and reduce costs.
  • Increased agility and scalability: VMs can be easily created and deployed, which can improve agility and scalability.
  • Improved disaster recovery: VMs can be easily backed up and replicated, which can improve disaster recovery.
  • Improved security: VMs can be isolated from each other, which can improve security.

Imagine you want to migrate Application B and Application C to another environment. Your operating system is 30 GB and the applications take another 10 GB, so the total size is 40 GB. If you clone the VM, you have to migrate the entire 40 GB VM, not just the applications you care about. You also can't scale up one specific application on its own: scaling the VM scales everything running inside it. So VMs are heavy packages, and they come with a boot delay; we can't quickly spin up a new machine or application.

What is Containerization?

Containerization is a lightweight virtualization technology that packages software applications and all their dependencies into a single, portable unit called a container. Containers share the underlying operating system kernel and resources, but each container has its own isolated runtime environment. This makes containers more efficient than traditional virtual machines, which each have their own guest operating system.

Monolithic and Microservices:

Monolithic architecture is a traditional software development approach where all of the components of an application are tightly coupled and packaged together as a single unit. This means that any changes to one component of the application can potentially impact the entire system.

Microservices architecture is a more modern approach to software development where applications are broken down into small, independent services that communicate with each other through well-defined APIs. This makes microservices architectures more scalable, flexible, and resilient to change. (A minimal code sketch of a single microservice follows the examples below.)

Examples of monolithic applications:

  • Early versions of Gmail and YouTube
  • Traditional CRM and ERP systems
  • Many legacy applications

Examples of microservices applications:

  • Modern web applications like Amazon and Netflix
  • Many cloud-based services, such as AWS and Azure

Microservices-based architectures are becoming increasingly popular, as they offer a number of advantages over traditional monolithic architectures. However, it is important to note that microservices are not a silver bullet, and they are not always the best solution for every application.
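To make the microservices idea concrete, here is a minimal sketch of a single service in Go: one small, self-contained program that owns one responsibility and exposes it through a well-defined HTTP API. The /price endpoint, the port, and the payload are illustrative choices, not taken from any real system.

```go
// price_service.go — a minimal sketch of one microservice.
package main

import (
	"encoding/json"
	"net/http"
)

func main() {
	// Other services talk to this one only through this endpoint, so its
	// internals can change, be redeployed, or scale independently.
	http.HandleFunc("/price", func(w http.ResponseWriter, r *http.Request) {
		json.NewEncoder(w).Encode(map[string]any{
			"item":  r.URL.Query().Get("item"),
			"price": 9.99, // hypothetical static price for illustration
		})
	})
	http.ListenAndServe(":8081", nil)
}
```

A complete application would be composed of many such services, each deployable and scalable on its own.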

What is a Container?

Containers are an abstraction at the app layer that packages code and dependencies together. Multiple containers can run on the same machine and share the OS kernel with other containers, each running as an isolated process in user space. Containers take up less space than VMs, can handle more applications, and require fewer VMs and operating systems.

Containerization leverages existing computing concepts; in the Linux world specifically, it builds on kernel primitives known as cgroups and namespaces.

A container is a standard unit of software that packages up code and all its dependencies so the application runs quickly and reliably from one computing environment to another.

Imagine you have a server. Take a Linux server: there is a hardware abstraction layer, and on top of that a kernel. On top of that, we install a runtime engine (a daemon). With the runtime engine we can create isolated environments: not exactly virtual machines, but something like them.

The Linux OS has some built-in features like namespaces and cgroups (control groups). Containers rely on the namespace feature: Linux can give each sandbox its own virtual network interfaces and virtual file systems, and namespaces create separate sandboxes with their own process ID (PID) spaces, so one sandbox cannot see another's processes. Linux provides this isolation out of the box, without installing a hypervisor. Inside such an isolation box we can run our application, and it is very lightweight, often just a few megabytes (roughly 1 MB to 5.5 MB). In short, namespaces provide the isolation, while cgroups let us restrict CPU, memory, and other resources.
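Because namespaces are the heart of containerization, here is a minimal sketch in Go of asking the kernel for them directly (Linux only, run as root). This is not how Docker is implemented in full; it only demonstrates the same clone flags a container runtime uses under the hood. The hostname "sandbox" and the /bin/sh shell are arbitrary choices for illustration.

```go
// sandbox.go — spawn a shell inside new UTS, PID, and mount namespaces.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"syscall"
)

func main() {
	if len(os.Args) < 2 {
		fmt.Println("usage: sudo go run sandbox.go run")
		return
	}
	switch os.Args[1] {
	case "run": // step 1: re-exec ourselves inside new namespaces
		cmd := exec.Command("/proc/self/exe", "child")
		cmd.SysProcAttr = &syscall.SysProcAttr{
			Cloneflags: syscall.CLONE_NEWUTS | syscall.CLONE_NEWPID | syscall.CLONE_NEWNS,
		}
		cmd.Stdin, cmd.Stdout, cmd.Stderr = os.Stdin, os.Stdout, os.Stderr
		must(cmd.Run())
	case "child": // step 2: we are now PID 1 inside the sandbox
		must(syscall.Sethostname([]byte("sandbox"))) // does not affect the host
		fmt.Println("inside sandbox, PID:", os.Getpid()) // prints 1
		sh := exec.Command("/bin/sh")
		sh.Stdin, sh.Stdout, sh.Stderr = os.Stdin, os.Stdout, os.Stderr
		must(sh.Run())
	}
}

func must(err error) {
	if err != nil {
		panic(err)
	}
}
```

Running `sudo go run sandbox.go run` drops you into a shell that sees itself as PID 1 with its own hostname, yet it shares the host's kernel; a real runtime adds cgroup limits, a separate root file system, and network namespaces on top of this.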

In the container world, we need a template for creating containers, which we call a container image. An image is static: once built, no one changes it. From one image we can spin up multiple containers, or container instances, similar to how VMs are cloned from a template. But containers don't get dedicated hardware or an OS; all of these elements come from the physical hardware abstraction layer, which they share. Each container carries its own binaries and libraries, so each and every container can host its own individual application.
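A minimal sketch of that idea, one static image producing many instances, is below. It shells out to the Docker CLI, which is assumed to be installed and running; the container names and the nginx image are arbitrary examples.

```go
// instances.go — start three containers from the same image template.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	for i := 1; i <= 3; i++ {
		name := fmt.Sprintf("web-%d", i) // hypothetical container names
		// docker run -d --name web-N nginx : same template, separate instance
		out, err := exec.Command("docker", "run", "-d", "--name", name, "nginx").Output()
		if err != nil {
			panic(err)
		}
		fmt.Printf("started %s -> container ID %s", name, out)
	}
}
```

All three containers run the same binaries and libraries from the image, yet each gets its own isolated file system, process space, and network identity.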

All of this comes with a runtime engine, which runs as a daemon. Daemons are background services that can manage multiple things; familiar Linux daemons include sshd, httpd, nginx, and systemd. The most popular container runtime engines are containerd, CRI-O, and Docker. A container runtime daemon can manage networks, file systems, third-party plugins and integrated build kits, and multiple containers and volumes. We can even run a CentOS container on a host running a different distribution. Why? The binaries change, but the kernel is the same, so we can have any kind of sandbox, and multiple containers can carry different binaries and libraries. This is why we use containers.

Containers offer a number of advantages over traditional virtualization technologies such as virtual machines, including:

  • Lightweight: Containers are much lighter than virtual machines, as they do not require their own guest operating system. This makes them more efficient to run and easier to scale.
  • Portable: Containers can be easily deployed to any environment that supports a container runtime such as Docker, including Linux, Windows, and macOS. This makes them ideal for cloud computing and microservices architectures.
  • Isolated: Containers are isolated from each other and from the underlying operating system. This makes them more secure and reliable.
  • Efficient: Containers share the underlying operating system kernel and resources, which makes them more efficient than virtual machines.
  • Scalability: Containers can be easily scaled up or down to meet demand.
  • Reproducible: Containers can be easily replicated, which makes it easy to deploy and manage applications.

In addition to these advantages, containers also offer a number of other benefits, such as:

  • Improved developer productivity: Containers make it easier for developers to develop, test, and deploy applications.
  • Reduced costs: Containers can help to reduce costs by improving resource utilization and reducing the need for physical hardware.
  • Increased agility: Containers can help to increase agility by making it easier to deploy and manage applications.
  • Improved security: Containers can help to improve security by isolating applications from each other and from the underlying operating system.

Containers vs Virtual Machines

Containers and VMs have similar resource isolation and allocation benefits but function differently: because containers virtualize the operating system instead of the hardware, they are more portable and efficient.

Overall, containers offer a number of advantages over traditional virtualization technologies, making them a popular choice for developing, deploying, and managing applications.

What is Kubernetes?

Kubernetes is an open-source container orchestration engine designed for automating the deployment, scaling, and management of containerized applications. This open-source project is hosted by the Cloud Native Computing Foundation (CNCF).

Understanding Kubernetes and Docker

To grasp Kubernetes, also known as K8s, it’s essential to have a foundation in Docker. In Docker, we deploy our applications inside containers. However, in Kubernetes, we manage containers on a larger scale, often numbering in the thousands or more, depending on the application’s traffic.
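As a rough illustration of that difference in scale, the sketch below shells out to the kubectl CLI (assumed to be installed and pointed at a cluster). With Docker we start containers one at a time; with Kubernetes we declare a desired replica count and let the cluster converge on it. The deployment name "web" and the nginx image are placeholder choices.

```go
// scale.go — declare a deployment, then ask Kubernetes for 100 replicas.
package main

import (
	"fmt"
	"os/exec"
)

// run invokes kubectl with the given arguments and prints its output.
func run(args ...string) {
	out, err := exec.Command("kubectl", args...).CombinedOutput()
	fmt.Print(string(out))
	if err != nil {
		panic(err)
	}
}

func main() {
	// Create a deployment from an image, much as docker run would use one.
	run("create", "deployment", "web", "--image=nginx")
	// Then let Kubernetes schedule and manage 100 replicas of it.
	run("scale", "deployment", "web", "--replicas=100")
}
```

Kubernetes then keeps those 100 containers running, rescheduling them if nodes fail, which is exactly the management-at-scale that Docker alone does not provide.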

Stay tuned for the next article in this series, where I’ll discuss how to use Docker and Kubernetes to deploy and manage microservices applications.

Thanks for reading! See you in the next article. Don't forget to follow me on Medium and leave a 👏, and stay connected on LinkedIn:

https://www.linkedin.com/in/achintha-bandaranaike-676a82163/

