Docker's built-in Swarm Mode

Swarm is a clustering tool in the Docker world, similar to Kubernetes. For a bit more background on Docker itself, check out the I love Docker article. In short, Swarm sits on top of your Docker engines and clusters your services.

As of Docker version 1.12, Swarm is a built-in feature of Docker. With the docker service command you can do everything Swarm-related: create new services, list them, remove them, show the details of one of them, and so on.

As shown in the image above and described in the previous article, you do everything on the Docker Swarm managers, and they deploy your services to the Docker Swarm workers, which appear as Docker daemons in the picture. The Docker daemon is also called the Docker engine.

Setup Swarm Cluster

Initialize Swarm Managers: This is the first thing you must do.

➜  ~ docker swarm init
Swarm initialized: current node (6ynppftkqpi4inrgivo83d0ha) is now a manager.

To add a worker to this swarm, run the following command:

    docker swarm join \
    --token SWMTKN-1-18zawryprebfzpnvlqjel18iow04ack42mbugt8gi4rrouuouo-1c1c4irupo73c66qroamriz16 \

To add a manager to this swarm, run 'docker swarm join-token manager' and follow the instructions.

As you see, the output tells you what to do next. One thing worth noting here: you may need to pass an advertise address explicitly, because Docker sometimes can't detect it on its own. This should be the IP address of your host machine.

➜  ~ docker swarm init --advertise-addr <host-ip>

There is one more thing to mention here: you can have multiple Swarm managers. This is also shown in the console output of the swarm initialization command.

To add a manager to this swarm, run 'docker swarm join-token manager' and follow the instructions.
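To get that manager token, you run the command below on an existing manager; it prints a ready-to-use join command. The token and address shown here are placeholders, yours will differ:

```shell
# Run on an existing manager; prints the join command for new managers.
docker swarm join-token manager

# On the machine you want to add as a manager, run the printed command, e.g.:
#   docker swarm join --token <manager-token> <manager-ip>:2377
```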

Initialize Swarm Workers: After you initialize the managers, you should make the host machines your Docker engines run on join the cluster as workers.

➜  ~ docker swarm join \
    --token SWMTKN-1-18zawryprebfzpnvlqjel18iow04ack42mbugt8gi4rrouuouo-1c1c4irupo73c66qroamriz16 \

Your Docker manager automatically acts as a worker in the cluster as well; you do not need to make it join separately.
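Once the workers have joined, you can verify cluster membership from a manager. The output below is only illustrative; node IDs and hostnames will differ in your cluster:

```shell
# List all nodes in the swarm; this only works on a manager.
docker node ls
# Example output:
# ID                          HOSTNAME   STATUS  AVAILABILITY  MANAGER STATUS
# 6ynppftkqpi4inrgivo83d0ha * manager1   Ready   Active        Leader
# 8xk2qvlz...                 worker1    Ready   Active
```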

Create Service

You can run any number of replicas of your application in the Docker cluster. This number can be in the hundreds or thousands, and it can be scaled up and down easily with a single command. As I mentioned in the previous article, Swarm also does this across your whole cluster: you won't need to know which task is running on which machine. This is really cool. You just set the replica count, or tell Swarm to scale it up or down.

➜  ~ docker -H <manager_url> service create --name <service_name> --replicas=10 --env DYNACONF_SETTINGS=settings.test --mount type=bind,source=/etc/hosts,destination=/etc/hosts,readonly=true  --publish 80:9000 <image>

Some of the arguments here also exist in regular docker commands. I will only cover the ones that are special for Swarm.

  • Replicas: This is your replica count. It is 10 here, meaning 10 instances of the image containing your application will be deployed in this service. You may have fewer than 10 workers, but don't worry, Docker Swarm will make sure 10 instances are running.
  • Mount: Regular Docker also has a mount feature, but I mention it because mounting works a bit differently here. Normally you would do this with docker -v /etc/hosts:/etc/hosts, which is not usable with Swarm. The form shown in the example above is the one that works for me.
  • Publish: I mention this because there is something special for Swarm here. Normally, you can't expose the same port for multiple containers on one Docker engine. But Docker Swarm may deploy several identical containers on a single machine, right? So what happens then, will it expose the same port for all of those containers? Nope. Only one port is exposed, on port 80, by a kind of load balancer built into Docker Swarm. The port is exposed for that load balancer, not for your containers. You can't even access your web applications directly: Swarm load-balances that exposed port across the containers of your service, which run in a private network called an overlay network. I will come back to load balancing later.
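If you are curious where Swarm actually placed those 10 tasks, you can inspect the service from a manager. The names below are the same placeholders used in the create command above:

```shell
# Show every task of the service and the node it landed on.
docker -H <manager_url> service ps <service_name>

# Show a summary of all services, including replica counts.
docker -H <manager_url> service ls
```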

There are lots of configuration options for docker service create; you should check them out:

docker service create --help

Scaling Service

This is the really fun part. Scalability with a single command; it makes me feel so powerful :-)

You just do:

#scale it down from 10 to 2
docker service scale <service_name>=2
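You can verify the result and scale back up the same way. The numbers here are just examples:

```shell
# Check the current replica count (the REPLICAS column shows e.g. 2/2).
docker service ls

# Scale back up to 10 with the same command.
docker service scale <service_name>=10
```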

Service Logs

Unfortunately, Docker doesn't support this as of 1.12. They have committed to at least an experimental version of it in 1.13. Until then, I have a workaround that needs to be run on each worker. It is not ideal, you won't be tailing all service logs in one place, but it will at least let you tail the logs of all of your service's containers on one worker at a time.

docker ps | grep -w <service-name> | awk '{ print $1 }' | while read -r i; do docker logs -f --tail=30 "$i" & done

This scans the running containers, filters the ones belonging to your service, and tails their logs as background processes.

To stop tailing:

pkill -f 'docker logs'

Load Balancing

This is another thing that I really love about Docker Swarm. I was worried about load balancing and the service discovery problem, because I just tell Swarm to deploy my service wherever it wants; I don't know which task will run on which worker before it starts. For example, let's assume you have an API exposed on port 8080, you have 4 workers, and your replica count is 3. The Swarm manager deploys 3 tasks, which means one of your worker machines won't contain the application. So you can't access the application from that server, right? Or so you'd think. You don't even know which server that is. Swarm does some magic here: port 8080 on every Swarm worker can be used to access the web application in the service, even on the node where no task of the service is running. This is really cool. Port 8080 also load-balances over the tasks running in the service: accessing node1:8080 doesn't mean you reach a task running on node1; it may land on another node because of the load balancing. Swarm does this across the whole cluster.
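To see this in action, you can hit the published port on any node, including the one without a task. The hostnames and the /whoami endpoint below are hypothetical, just something your API might expose:

```shell
# node4 runs no task of the service, yet the routing mesh still answers,
# forwarding the request to a task on another node.
curl http://node4:8080/whoami

# Repeated requests to the same node are spread over all 3 tasks.
for i in 1 2 3 4 5 6; do curl -s http://node1:8080/whoami; done
```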

Service Discovery

Service discovery is something you need to solve when you are dealing with anything cloud-like. When the number of your application instances isn't fixed, your instances' network attributes will change, and they will change again on each redeployment. Let's assume your stack contains Elasticsearch and a web application that communicates with it. These are separate services, and you can scale Elasticsearch or your web application up or down whenever you want. But, don't miss this: how does your web application communicate with Elasticsearch? I ask because we don't know where the Elasticsearch nodes will be running at the beginning, so we can't hard-code IPs in the web application's configuration file. Here Docker says: with Swarm, you can put those services in a special network of the overlay type. Docker Swarm provides a common DNS for them so they can communicate with each other. Let's name these services elasticsearch_cluster and my_web_application. The web application will then be able to reach the hostname elasticsearch_cluster, and it doesn't have to care about the Elasticsearch service beyond that. Unless all of the Elasticsearch tasks are down, the web app will keep working properly. I haven't tried this yet, but they say Swarm will handle it.
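The setup described above can be sketched as follows. The service names are the ones used in the text; the network name, the images, and the ELASTICSEARCH_HOST variable (something your app might read) are assumptions for illustration:

```shell
# Create the overlay network both services will share.
docker network create --driver overlay my_overlay

# Attach both services to it; Swarm's internal DNS makes each service
# reachable by its service name inside the network.
docker service create --name elasticsearch_cluster --network my_overlay <elasticsearch_image>
docker service create --name my_web_application --network my_overlay \
    --env ELASTICSEARCH_HOST=elasticsearch_cluster <web_app_image>
```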