Explained for Beginners: How Docker Containers Work


As microservices and cloud-native architectures gain popularity, developers are increasingly leveraging containers to achieve agile and scalable application deployments. Docker has emerged as the de facto industry standard for containerization.

However, for those still new to Docker, grasping concepts like containers and virtualization can be challenging. Questions like how containers differ from virtual machines or how they enable lightweight packaging of apps easily crop up.

In this comprehensive Docker tutorial, I will be answering these questions by explaining the architecture and internals of containers for an absolute beginner audience. By the end, you will have clarity on what problems Docker solves and how it solves them under the hood. Let’s get started!

What is Containerization? 
Traditionally, applications would be installed directly on bare-metal servers or virtual machines running a standard operating system like Linux or Windows. The entire runtime environment depended heavily on the target server's specifications, so the same application often behaved differently from one environment to the next.

Containerization solves this problem through OS-level virtualization. It allows bundling an application together with all its software dependencies into a standardized unit called a container image. This image can be easily shared, downloaded and run on any target environment - similar to shipping containers transporting goods intact by sea, rail or road.

So in a nutshell, containerization enables packaging software into lightweight, portable and self-sufficient containers to simplify deployment and execution across environments.
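
For instance, assuming Docker is already installed and using the public nginx image purely as a stand-in for any containerized application, downloading and running an image takes two commands:

# Download a prebuilt image from a public registry (Docker Hub)
docker pull nginx:alpine
# Run it in the background, mapping host port 8080 to the container's port 80
docker run --rm -d -p 8080:80 nginx:alpine

The exact same image behaves identically on a laptop, an on-premises server or a cloud VM.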

Key Benefits of Containerization:

-    Consistent software execution regardless of underlying infrastructure  
-    Drastic portability improvements compared to VMs
-    Fine-grained resource control and isolation
-    Management overhead reductions through automation
-    Support for microservices and DevOps practices  

With containerization basics covered, let’s now see how Docker employs OS-level virtualization to build containers.
 


How Do Docker Containers Work?


Docker relies on OS-level virtualization instead of the hardware virtualization techniques used by hypervisors. This allows creating isolated user-space instances called containers rather than entire virtual machines.
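
One quick way to see the difference, assuming Docker is installed on a Linux host, is to compare kernel versions inside and outside a container; because containers share the host kernel instead of booting a guest OS, both commands report the same version:

# Kernel version on the host
uname -r
# Kernel version inside a throwaway Alpine container - identical, since the kernel is shared
docker run --rm alpine uname -r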

Some key OS constructs Docker leverages include:

Namespaces - Divide host OS resources into isolated groups
Control groups (cgroups) - Limit the amount of resources containers can use
Union file systems (UnionFS) - Enable layering of container filesystems

Let’s explore each aspect further:

Kernel Namespaces 
The Linux kernel provides namespaces to compartmentalize host system resources such as process trees, network interfaces and mount points.

Docker creates separate namespaces exposed only to the containerized process, giving it the illusion of a dedicated environment. Some namespace types include (a quick demonstration follows the list):

pid - Isolated process IDs 
net - Own network stack  
mnt - Separate mount table
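
As a simple illustration of PID namespace isolation (assuming Docker is installed), listing processes inside a throwaway Alpine container reveals only the container's own process tree, starting at PID 1, while the host still sees everything:

# Inside the container only its own processes exist, numbered from PID 1
docker run --rm alpine ps
# On the host, the full process list is still visible
ps aux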

Control Groups  
Also referred to as cgroups, they enforce resource usage limits on containers by actively measuring and restricting consumption. This prevents noisy neighbor issues.

Controlled resources include CPU, memory, disk I/O, network bandwidth and more. Cgroups enable fine-grained tuning.
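
The docker CLI exposes these cgroup limits through flags such as --memory and --cpus; the values below are arbitrary examples:

# Cap this container at 256 MB of RAM and half a CPU core
docker run --rm --memory=256m --cpus=0.5 alpine sleep 5
# Show a one-off snapshot of resource usage for running containers
docker stats --no-stream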

Union File Systems
A union file system (UnionFS) aggregates the contents of underlying layers into a single unified view. Filesystem changes apply to the top writable layer only.

Docker utilizes UnionFS to provide containers with layered filesystems using stackable image layers. This minimizes storage overhead.
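
You can inspect these stacked layers yourself; for example, the history subcommand lists one entry per layer of the Node base image used later in this guide:

# Each row corresponds to an image layer created by a Dockerfile instruction
docker image history node:16-alpine
# Layers shared between images are stored only once on disk
docker image ls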

Namespaces, cgroups and UnionFS together facilitate OS-level virtualization for spawning containers. But how does Docker handle creating and running the containers themselves?

That brings us to Docker architecture next!

Docker Architecture Basics 
The Docker platform relies on a client-server architecture:

Docker Client - Command-line tool used to issue Docker commands like build, run and so on
Docker Host - Server running the Docker daemon, which listens for and executes those commands
Registries - Services that store and distribute images

Here is how each component fits together (a quick way to observe this split follows the list):

1. A developer issues docker commands through the client
2. Commands get sent to the Docker daemon
3. The daemon builds container images and spawns containers
4. Containers access files via the union file system
5. Registries store finished images for reuse
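
A simple way to observe this client-server split, assuming the daemon is running locally, is that even a version query travels from the client to the daemon and back:

# Prints the client version, then asks the daemon (server) for its version
docker version
# High-level details about the daemon, storage driver and configured registries
docker info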

This neatly abstracts away low-level virtualization, image management and container execution workloads from developers. Next up, let’s peek at what goes inside a container image.

Anatomy of a Docker Container Image
Docker images are immutable, lightweight packages bundling up the application code, libraries, dependencies and files required to run a service in a container. Images are created from text files of instructions called Dockerfiles, which automate building, shipping and running containers.

Each instruction adds a new filesystem layer, ultimately producing a slim yet complete stack. Images also inherit parent layers from trusted base images, improving reuse.

Once built, images are stored in registries and remain unchanged, acting as reliable bases for spinning up container instances. Any updates generate new images, maintaining a revision history.

This architecture makes it easy to roll back to or reference older versions if issues crop up down the road!
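
As a quick sketch of how that rollback works in practice (using the my-app image built in the next section and made-up version numbers), each release is simply tagged as its own immutable image:

# Tag the current build with an explicit version; old tags stay available
docker image tag my-app my-app:1.0
# Rolling back is just a matter of running the earlier tag again
docker container run -p 3000:3000 my-app:1.0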

Putting It All Together With Examples 
Alright! Now that you understand the blueprint, let’s construct a sample Dockerized app to appreciate how real-world projects utilize these advantages:


# Dockerfile
FROM node:16-alpine
WORKDIR /app
COPY . .
RUN npm install
CMD ["node", "server.js"] 

This Dockerfile automates setting up a containerized Node.js application:

- FROM pulls the Node 16 Alpine base image
- WORKDIR sets the working directory inside the image
- COPY adds the app files
- RUN installs dependencies
- CMD starts the server

Build Container Image
Let’s now build an immutable image package from this config using:


docker image build -t my-app .
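
If the build succeeds, the new image should appear in the local image list (sizes and timestamps will vary):

# Confirm the image exists locally
docker image ls my-app
# Optionally inspect the layer added by each Dockerfile instruction
docker image history my-app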

Run the Container 
With the image ready, starting a container takes just one command:


docker container run -p 3000:3000 my-app


This spins up the containerized Node.js server, isolated from the host machine, without worrying about runtime dependencies!
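
To verify it is working (assuming server.js listens on port 3000, as the port mapping above implies), check from a second terminal on the host and stop the container when done:

# List the running container and its published port mapping
docker container ls
# The app should respond on the mapped host port
curl http://localhost:3000
# Stop the container by ID or name when finished
docker container stop <container-id>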

Conclusion
And that wraps up our Docker fundamentals 101 crash course! Let's quickly recap - we went through:

✅ Benefits and principles behind containerization 
✅ OS-level virtualization constructs like namespaces, cgroups etc. 
✅ Separation of concerns in the Docker architecture
✅ Anatomy of container images and sample Dockerfile
✅ Building images and running containers  

Docker empowers developers to ship production-grade environments down to the very last library and configuration file. Containers let you focus on solving problems rather than hassling with dependencies or infrastructure!

I hope you enjoyed this beginner-friendly guide explaining the internals of Docker containers. Feel free to get hands-on experience with building sample apps. Let me know if you have any other containerization questions!
