This is the first in a series of articles on Docker. In this article, we aim to introduce the reader to the concept of containers and some of the benefits they offer.

Subsequent articles will take a deeper dive into the usage and technologies underlying Docker and its ecosystem.

What is Docker?

Docker is a platform that uses containerization technology to package an application and all its dependencies together inside containers. By doing so, the application works in any environment (at least in theory). One way to look at it is as an easy way to ship “production ready” applications with all dependencies packed in.

What is a Virtual Machine (VM)?

A Virtual Machine (VM) is, at its core, a file. This file is called an image. The file has an interesting property: with the right tools, it acts as if it were an actual physical computer. This means you can run multiple virtual machines inside one physical machine, which saves the cost of provisioning physical computers and the associated costs – licenses, maintenance and so on.

A VM can then be thought of as a special kind of program – one that runs an operating system, or part of one. A VM is said to be sandboxed, which is tech-geek speak for saying that it is isolated from the host operating system. This makes it useful for several purposes, including but not limited to:

  1. Testing other operating systems (including beta releases)
  2. Accessing data that cannot be accessed normally (typically virus-infected data)
  3. Performing OS-level backups
  4. Running programs that were not meant for the host operating system itself

More than one VM can be run simultaneously on the same physical computer. This is done by means of a special piece of software called a hypervisor. Each VM is presented with a set of virtual hardware; in effect, the VMs share the underlying physical resources, saving costs in physical hardware and associated maintenance costs – people, power and cooling, among others.

Figure 1: Core Virtualization
Fig 1.a – MS-DOS (yes, the old version, running on a Windows 10 machine)
Fig 1.b – Windows 3.11 (yes, an old version of Windows on Windows 10)

So why do we need a container?

Virtual machines take a long time to boot, and running several on the same physical machine can cause performance issues. Management is another challenge when running multiple VMs.

If you have seen Fig 1.a and Fig 1.b and think it looks easy (well, actually it is!), the setup is nonetheless messy and very non-intuitive – it takes a long time and a complicated process to get it working right.

Think of a container as virtualization at the OS level.
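One quick way to see what “OS level” means in practice is the following sketch (assuming a Linux host with Docker installed): a container does not boot its own kernel; it shares the host’s.

    # On a Linux host, the kernel version reported inside a container
    # matches the host's, because containers share the host kernel
    uname -r                          # kernel version on the host
    docker run --rm ubuntu uname -r   # the same version, from inside a container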

Figure 2: Containers

Some of the advantages of containers over VMs are:

– they tend to be more lightweight,

– they boot up faster,

– they can be managed better (automatic removal when done, for example – see the sketch below).
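That auto-removal point maps to a real flag. A minimal sketch:

    # --rm tells Docker to delete the container automatically once it exits
    docker run --rm ubuntu echo "hello from a throwaway container"

    # no leftover container from the run above shows up here
    docker ps -a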

These advantages are shown graphically below:


Ref: https://www.slideshare.net/EdurekaIN/getting-started-with-docker-docker-tutorial-docker-training-edureka

– Containers allow you to run more applications on a physical machine than VMs do. When resources are a constraint, containers may be the better choice.

– Containers allow one to create portable and consistent operating environments across development, staging and production. This consistency helps reduce development and deployment costs, besides making it easier to monitor systems and ensure a higher level of availability for the end customer.

Docker Terms and Terminology

Docker Image

A Docker image is a read-only template used to create containers. THIS IS IMPORTANT – A READ-ONLY TEMPLATE. These images are either built by you or readily available from Docker Hub or any other registry.
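For instance, pulling a ready-made image from Docker Hub and listing what is available locally looks like this (the 20.04 tag is chosen purely for illustration):

    # Pull a read-only image from Docker Hub (the default registry)
    docker pull ubuntu:20.04

    # List the images available locally
    docker images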

Docker Container

A Docker container is an instantiation of an image. It contains everything that is needed to run the application – from the OS userland to the network configuration to libraries to the actual app. This is the actual “running” instance.
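To make the image/container distinction concrete, here is a small sketch (the names demo and demo2 are arbitrary):

    # Create a container from the ubuntu image (instantiated, but not yet running)
    docker create --name demo ubuntu sleep 60

    # Start it, then list running containers
    docker start demo
    docker ps

    # Many containers can be instantiated from the same read-only image
    docker run -d --name demo2 ubuntu sleep 60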

Docker Daemon

The Docker daemon is the core of Docker. It works together with the Docker CLI (Command Line Interface). Think of it as a service that runs on the host operating system (and yes, it now runs on Windows 10 upwards via Docker Desktop – older versions of Windows used a separate tool called Docker Toolbox).
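You can see the CLI/daemon split for yourself with a quick sketch (assuming a typical Linux install where the daemon runs under systemd):

    # 'docker version' reports a Client section (the CLI) and
    # a Server section (the daemon) separately
    docker version

    # On most Linux installs the daemon runs as a systemd service
    systemctl status docker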

Docker Architecture – a 10,000-foot view


Ref: https://www.slideshare.net/EdurekaIN/getting-started-with-docker-docker-tutorial-docker-training-edureka

We have the client (usually the Docker CLI). Note that since the daemon exposes a REST API, you could write your own client in Java or C# to call the Docker daemon.
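As a small illustration of that REST API (a sketch, assuming a Linux host where the daemon listens on its default Unix socket; the API version in the path may differ on your install):

    # Ask the daemon for its running containers directly over HTTP,
    # bypassing the docker CLI entirely
    curl --unix-socket /var/run/docker.sock http://localhost/v1.41/containers/json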

The Docker host is the server where the Docker daemon is running. The client and the host do not have to be on the same machine – that is the key takeaway.
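For example, the CLI can be pointed at a remote daemon via an environment variable (a sketch – the host name is illustrative, and the remote daemon must be configured to listen on TCP; 2375 is the conventional unencrypted port):

    # Point the local docker CLI at a remote Docker host
    export DOCKER_HOST=tcp://docker-host.example.com:2375
    docker ps    # now lists containers on the remote host, not the local one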

Finally, we have the registry where the images are stored. (At Cogniphi, we have our own registry hosted in Azure, but it could very well be a standard image pulled from Docker Hub. And for those technically inclined: the registry is itself an actual Docker container.)
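To illustrate that last point, you can spin up a private registry locally from the official registry image (a sketch – my-ubuntu is an arbitrary name):

    # The registry itself ships as an image – run one locally on port 5000
    docker run -d -p 5000:5000 --name registry registry:2

    # Tag an existing image for the local registry and push it there
    docker tag ubuntu localhost:5000/my-ubuntu
    docker push localhost:5000/my-ubuntu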

So what happens when you issue a command like the one below? (In the next articles, we will discuss all the commands and the meaning of the various parameters.)

docker run ubuntu /bin/bash

The client asks the daemon for an image called ubuntu. If one is found locally, well and good. If not, the daemon pulls the image down from the registry. Next, it starts a container from the image and runs /bin/bash inside it. When the command finishes execution, the container is automatically stopped.
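You can watch this sequence for yourself (a sketch – -it is added here so /bin/bash gets an interactive terminal instead of exiting immediately):

    # If the ubuntu image is not present locally, the daemon pulls it first,
    # then starts a container and hands you a shell inside it
    docker run -it ubuntu /bin/bash

    # After you type 'exit', /bin/bash terminates and the container stops;
    # it still exists in the stopped state:
    docker ps -a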

Why is this so important?

When you want to go to market quickly, you do not want to spend a lot of time setting up infrastructure. Docker comes with a large number of pre-built images that you can deploy in a matter of minutes.

For example, say you want to set up Apache HTTPD. In the Windows world, you would need to first find the relevant Apache HTTPD binary, download and unzip it, configure it and run it – a set of activities that can easily take up to a day. With Docker it becomes a one-liner:

docker run httpd

While the above example is oversimplified, the server can still be started in a few minutes, not days. You want to set up MySQL? Again, a single one-liner – a sketch of both is shown below.
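(The flags beyond the bare commands, the container names web and db, and the placeholder password are assumptions for illustration.)

    # A slightly fuller httpd one-liner: run detached, publishing the
    # container's port 80 on port 8080 of the host
    docker run -d --name web -p 8080:80 httpd

    # The MySQL one-liner; the image requires a root password via
    # the MYSQL_ROOT_PASSWORD environment variable
    docker run -d --name db -e MYSQL_ROOT_PASSWORD=changeme mysql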

Speed of deployment and time to market are among the key selling points. You would agree that spending time on plumbing is not worth anybody’s while.

Say you want to make sure that your application runs on Ubuntu 18.04 as well as Ubuntu 20.04. It is a simple matter of spinning up two containers – one based on an Ubuntu 18.04 image and one on an Ubuntu 20.04 image – and running your application on both. Note that this significantly reduces testing time: since containers are lightweight, they can be run in parallel. Again, the end result is that your products and projects reach the market more rapidly.
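A sketch of what that parallel run could look like (run_tests.sh is a hypothetical test script in the current directory, mounted into each container):

    # Run the same test script on two Ubuntu versions in parallel;
    # the current directory is mounted into each container as /app
    docker run --rm -v "$PWD":/app -w /app ubuntu:18.04 ./run_tests.sh &
    docker run --rm -v "$PWD":/app -w /app ubuntu:20.04 ./run_tests.sh &
    wait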

So what do we get beyond speed to market and scalability? Docker today is as secure or insecure as the underlying OS. In the early days, Docker was considered insecure because everything ran as root – this is no longer true. In the next articles in this series, we will cover the hows and whys of securing and managing a Docker infrastructure, briefly touching on topics like Docker Swarm and Kubernetes, and then seeing what the whole ecosystem looks like.

If this sounds interesting, sign up for an account at hub.docker.com (account creation is free; some features, such as additional private repositories, are not – so be careful). If you would rather not, you can head over to https://labs.play-with-docker.com/ – it provides a cloud-based playground where you can learn more about Docker. Have fun Dockering!

Source – The images used in the blog are from a course previously designed and published by the author himself.