Containers have existed for many years, but it’s only with the success of Docker and its peers that the technology has gained serious traction within the enterprise. The benefits of containerization are clear — containers provide a mechanism for cross-platform deployment and for isolating the parts of an application, or multiple applications and services, from each other. But I still find that there’s some confusion about the proper place of containers in the enterprise IT arsenal. I’d like to take a look at what containers are, how they differ from virtual machines, and when they should be used.
A virtual machine, virtual server, or cloud server — these aren’t exactly the same thing, but they’re close enough for our purposes here — is a software-based server running on a physical server. The physical server runs an operating system, on top of which runs a hypervisor (or similar). The hypervisor manages a number of virtual guest servers, each of which has its own operating system. Each virtual server is a full server in its own right and can be treated exactly as any other server.
The benefits of virtualization include the ability to quickly launch new servers and discard them when they’re no longer needed, something that’s neither feasible nor economical without virtualization.
A container can be thought of as a lightweight form of virtualization, one that requires neither a hypervisor nor a guest operating system. Each container is a package of the libraries, utilities, and binaries that a service or application needs to run. Containers are, as the name suggests, self-contained — each container needs nothing more than a minimal operating system environment running on a server or other computer. The same container will run on a Linux server and on a developer’s laptop. That makes it very easy to create end-to-end development workflows, because you never have to worry about whether a container will run in the live environment.
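To make that packaging idea concrete, here is a minimal, hypothetical Dockerfile for a small Python service (the file names and base image are illustrative assumptions, not taken from this article). It lists exactly what the service needs — a slim base environment, its dependencies, and the application itself:

```dockerfile
# Start from a minimal base image rather than a full guest OS
FROM python:3.12-slim

# Copy in only the application and its dependency list
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY app.py .

# Everything the service needs is now inside the image, so the
# same container runs unchanged on a laptop or a Linux server
CMD ["python", "app.py"]
```

Building this once with `docker build` produces an image that behaves the same wherever a container runtime is available, which is what makes those end-to-end workflows predictable.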
Containers are much more resource-efficient than virtual servers, and they can “boot” much more quickly. Containers are managed by tools that take up few resources on the host server, unlike hypervisors, which can impose significant overhead.
So should you choose containers or virtual machines? As Scott Lowe points out, in some ways this isn’t a sensible question. Containers and virtual servers operate at different levels of the technology stack; they have a different scope.
“You see, a Docker container is intended to run a single application. You don’t spin up general-purpose Docker containers; you create highly specific Docker containers intended to run MySQL, Nginx, Redis, or some other application.”
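Lowe’s point is easy to see in practice (a sketch assuming Docker is installed; the container names here are illustrative). You don’t launch a general-purpose server and install software onto it — each `docker run` starts one single-purpose container from an image built for that one job:

```shell
# Each container runs exactly one service, from a purpose-built image
docker run -d --name cache redis:7
docker run -d --name web -p 8080:80 nginx:1.27

# The containers are isolated from each other but share the host kernel;
# an application is composed by wiring several such containers together
docker ps --format '{{.Names}}: {{.Image}}'
```

Contrast that with a virtual machine, which is a general-purpose server you could use for anything.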
Virtual servers are great for reasons that have been rehearsed ad nauseam, and, before containers became popular, they were used for applications for which we’d now choose containers. But they weren’t a perfect fit for many of those applications. It’s possible to spin up hundreds of virtual machines on a physical server to support multiple services or applications, but it’s not ideal. Containers are more resource-efficient, and they’re great for deploying isolated hosting or development environments that are easily duplicated.
According to Andy Patrizio:
“Before, admins usually walled off apps from each other by putting one app per virtual machine. Now you didn’t need to spin up a VM for every app. You can run multiple apps in one VM environment. This meant no longer needing hundreds of VMs on one machine.”
Virtual machines and containers are, in fact, complementary technologies. Cloud infrastructure-as-a-service platforms provide a flexible, elastic, on-demand foundation onto which lightweight containers are deployed. It’s worth noting that this combination of public cloud and containers has many of the advantages of platform-as-a-service, without the restrictions that PaaS imposes on developers.