How Docker Addresses Kernel-Level Redundancy

Docker is making waves, especially among those in system administration, DevOps, and cloud/virtualization roles. Here we attempt to explain Docker's most disruptive feature without assuming prior knowledge of Docker's underlying technologies.

In a virtualized datacenter, each VM runs its own OS on top of virtualized hardware. Any number of VMs may run services (web hosting, backup services, Oracle, and so on) on the same physical machine.

Because virtualization abstracts hardware and OS state into files, we can migrate VMs across physical machines and reclaim CPU cycles that sat idle when a single physical machine was dedicated to a single OS. Even so, a virtualized datacenter still carries significant inefficiency.

Imagine a virtualized datacenter of 100 VMs, all running the same operating system. The hardware appears to be used efficiently: each server's CPU and RAM sit at acceptable utilization levels.

Where, then, is the inefficiency? Most servers run near hardware capacity (CPU, RAM), but if we peer inside the operating systems of two VMs, we find that even VMs with different purposes (unit testing vs. web hosting) are more alike than different. Each runs kernel code that handles networking, file systems, permissions, and hundreds of other functions, and that code provides the same services to every OS/VM.
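
A quick way to see this likeness for yourself (assuming two Linux VMs you can log into; the ps invocation below is standard procps) is to list the kernel's own worker threads on each machine. The lists come out nearly identical regardless of each VM's purpose:

    # On each VM: list the kernel threads (children of kthreadd, PID 2).
    # Both machines show largely the same workers: ksoftirqd, kworker, rcu_*, ...
    $ ps --ppid 2 -o pid,comm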

Why, then, can we not consolidate tasks onto fewer virtual machines? In some cases we can, but VMs serving different purposes carry dependencies present on one and absent on another; for example, VM A may require one version of a database or daemon while VM B requires another. If we installed the union of all dependencies for every task in the datacenter, each machine would become bloated, and those dependencies would likely conflict with one another anyway.

What we want, then, is a way to share the OS machinery behind networking, file systems, permissions, and other services while retaining the ability to choose the dependencies each task needs. Docker is the answer to this desire.
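
As a concrete sketch of what this looks like in practice (the container names and host ports below are illustrative; the commands assume the standard docker CLI and the official postgres images), two tasks can run conflicting versions of the same database side by side on one host:

    # Task A gets PostgreSQL 13; task B gets PostgreSQL 16.
    # Each container carries its own dependencies and neither touches the host's.
    $ docker run -d --name task-a-db -p 5433:5432 -e POSTGRES_PASSWORD=secret postgres:13
    $ docker run -d --name task-b-db -p 5434:5432 -e POSTGRES_PASSWORD=secret postgres:16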

All containers run on a SINGLE operating system, which provides the fundamental services every computing task requires, yet each container remains isolated from the others' dependencies. Previously, X copies of the networking, file system, and other kernel services ran on X VMs. With Docker, one set of those services supports X containers running on top of it. Each container behaves as if it had its own networking, file system, and other services, but in fact one kernel provides them to every container. That is Docker's innovation.
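
You can check the single-kernel claim directly (assuming Docker on a Linux host): containers built from entirely different distributions all report the host's kernel release, because there is only one kernel:

    # The host and every container print the same kernel release.
    $ uname -r
    $ docker run --rm ubuntu uname -r
    $ docker run --rm alpine uname -r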
