I love Docker

I have heard about Docker for a long time. It seemed interesting and entertaining to me, but I never got around to giving it a try. I was using Chef as a DevOps tool. It is great, but it is also complex. You are not exactly independent from the machine itself either; your environments are still not isolated from the host machine, which means you can't recover everything when bad situations happen.
A stack of shipping containers forming a colorful pattern
Photo by Guillaume Bolduc / Unsplash

In this article, I am not going to go through Docker commands in depth. I will mostly talk about general Docker concepts.

Anyway, Docker is a kind of thing that brings DevOps and virtualization together. You actually build isolated namespaces called containers, out of templates called images that contain whatever you need in your environment. Like jar files built by Java, an image is a build artifact: once you have the image, you can run it almost anywhere. When I saw Docker for the very first time, it was running on Linux only. But now I see it also works on macOS and Windows. I am using it on my macOS system for my development environment.
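To make the "image as a template" idea concrete, here is a hedged sketch of a Dockerfile; the base image choice and the app.jar file are made up for illustration:

```dockerfile
# A minimal image description (file and image names are made up for illustration)
FROM openjdk:8-jre
COPY app.jar /app.jar
ENTRYPOINT ["java", "-jar", "/app.jar"]
```

Building it with `docker build -t myapp .` produces the passive image; `docker run myapp` turns it into a running container.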

Containers vs VMs

Docker virtualization is different from regular virtual machines. I am not an ops guy, but briefly, Docker relies on LXC, which is different from KVM; KVM is a real virtual machine. I am not deeply into those terms technically, but I can say that with LXC, you just share resources on a single kernel. You don't have a separate kernel per guest node, unlike KVM. LXC became possible thanks to cgroups, a kernel feature that supports resource limitations on I/O like network, CPU, disk, etc. Containers are simply more like different applications (guest nodes) running on a Linux machine than like virtual machines. This is why your Docker containers share the same kernel version as the host Linux machine. OK, let's stop here and make no more confusion.
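If you want to see the shared kernel for yourself, a quick check (assuming Docker is installed and the daemon is running on a Linux host) is to compare kernel versions:

```shell
# Both commands should print the same kernel release,
# because the container reuses the host kernel (there is no guest kernel)
uname -r
docker run --rm alpine uname -r
```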

I want to talk about some Docker terms:


Images

These are the meta of what you are going to run; a template, maybe. I liken this to object classes in OOP: classes are the meta of live instances, they are just descriptions. So images are also passive things. Whenever you build, you actually create an image. You can build your own images or pull them from Docker Hub or your on-premise registry.


Containers

These are instances of your images; containers are the running form of an image. You actually need to run a blocking process, at least /bin/bash. This is why Docker is not like a VM: each Docker container is meant to run a single process. You can run other processes in the background, but this is against its nature. This is where you need to change the way you see Docker. It is not a VM. It is not something to ssh into whenever you want. It is not actually a machine. OK, I don't know how many times I should remind you of that, but you should get it clearly; I got stuck somewhere here when I was playing with it. It is just an isolated environment. If you do not have a blocking process to run, then you may not need Docker, because Docker will kill the container after the process ends. If you give Docker a long-running process, that process is probably not going to end, so the container keeps running. You can define that process when you build your image or at the container's run time. These are called entrypoints.
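A quick way to see the "blocking process" rule in action (assuming a working Docker setup; the images are the official alpine and nginx ones):

```shell
# The container lives exactly as long as its main process:
# echo finishes immediately, so the container exits immediately
docker run alpine echo "hello"

# The official nginx image runs nginx as a blocking foreground process,
# so this container keeps running until you stop it
docker run -d nginx
```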


Registry

This is a third-party image that keeps track of your images. It is a primitive, on-premise version of Docker Hub. You can pull and run images from your own registries.
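As a sketch, you can run a registry locally with the official registry image (the myapp image name is made up for illustration):

```shell
# Start an on-premise registry on port 5000
docker run -d -p 5000:5000 --name registry registry:2

# Tag a local image for that registry and push it
docker tag myapp localhost:5000/myapp
docker push localhost:5000/myapp

# Later, pull it back from your own registry
docker pull localhost:5000/myapp
```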

Docker Engine (Daemon)

This is the Docker daemon running on your machine that does everything about Docker: building, running, listing running containers. Whatever you want to do with Docker, you are going to need the Docker Engine, and you need to make sure it is up and running on your machine.

You can also connect remotely to a Docker Engine running on another machine. For example, you can tail the logs of a remote Docker container from your local machine.
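A hedged sketch of talking to a remote engine (the host name is made up, and the remote daemon must be configured to listen on TCP):

```shell
# Point the client at a remote daemon instead of the local socket
export DOCKER_HOST=tcp://remote-host:2375

# Now ordinary commands run against the remote engine,
# e.g. tailing a remote container's logs from your local machine
docker logs -f my-container
```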


Swarm
This is where I fell in love with Docker. I really enjoy it when I play with it. This is the kind of thing Kubernetes does. Let me explain it with some cases. Assume you need your web application to run replicated; for example, you want to run 3 instances of the Docker image that contains your web application. What you need to do is run your Docker image three times with different container names. What if you need to scale up or down? You are going to handle it manually. What if you need hundreds or thousands of running instances for special days? Or what if you want a highly available environment, because you care about high availability and about what happens if you lose a node? This is where Swarm comes in. With Swarm you can run multiple instances of your Docker images and easily scale up and down with a single command, across multiple host machines on which the Docker Engine is running.

Here there is a term named service, related to Swarm. A service consists of a bunch of running containers. You name the service and give the replica count for your application. Swarm distributes that service across your Docker cluster according to the replica count you give. This is a really enjoyable part of it. You must play with its scaling feature; it is really fun :-)
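For example, a sketch of what that looks like with the service commands (the service name is made up, and nginx stands in for your web application image):

```shell
# Run a service with 3 replicas across the cluster
docker service create --name web --replicas 3 -p 80:80 nginx

# Scale it up for a special day, then back down, with one command
docker service scale web=100
docker service scale web=3

# See how the tasks are spread across the nodes
docker service ps web
```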

Docker Swarm was actually a third-party image that you needed to run, and configuring and using it was a bit harder. But as of Docker 1.12, Swarm is a native feature of Docker; you can simply use it via the docker service command. It still has some gaps; for example, as of version 1.12 you can't tail all of a service's container logs at once, but there is a commit covering that, which is going to be at least experimental in version 1.13. I am waiting for it :-)

Swarm Worker

These are the machines that are going to host your Docker containers. Containers in a service are deployed to the Docker Engines on these machines. A Swarm worker may run multiple tasks of a service; for example, 5 tasks of a service replicated 15 times may run on the same Swarm worker. (Unless you use global mode. With global mode, you make sure each worker runs a single task of the service.)
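To contrast the two scheduling modes, a small sketch (the service names are illustrative, with nginx as a stand-in image):

```shell
# Replicated mode: 15 tasks, placed on workers as the scheduler sees fit
docker service create --name web --mode replicated --replicas 15 nginx

# Global mode: exactly one task on every worker in the cluster
docker service create --name agent --mode global nginx
```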

Swarm Manager

This is the machine that orchestrates the workers. You are going to talk to these machines to deploy your services. I use "machines" in the plural, as you may notice, because the Swarm manager can run on multiple machines to provide high availability. It runs in a master/slave architecture.
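A sketch of setting that up (the IP address is made up, and the token placeholder is left as-is for you to fill in):

```shell
# On the first manager
docker swarm init --advertise-addr 10.0.0.1

# Print the token a new manager needs to join (for high availability)
docker swarm join-token manager

# On another machine, join as an additional manager using that token
docker swarm join --token <manager-token> 10.0.0.1:2377
```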

That's all I can say about Docker at the moment. I am really new to Docker, and I will definitely use it for some of my cases and blog about it.