
Introduction to Docker Swarm (Part 1)

Published On: January 24, 2024


In this 3-part series, we will learn what Docker Swarm is and how to set it up for production. If you want to deploy your web application to production, you need proper systems in place so that it is scalable and fault tolerant. The system should make it easy to apply new updates or hotfixes without bringing down the entire website. We also need tools to monitor the health of the servers and capture application logs so that we can see how the application is behaving. Docker Swarm is a clustering and scheduling tool for Docker containers. It allows us to orchestrate our application and makes it easy to add observability tools (like Prometheus, Grafana, and others) to our production environment. In this article, we will learn how to deploy a simple Node.js application using Docker Swarm.

Prerequisites:

Before proceeding with this tutorial, you should have a basic understanding of the following:

  • Docker (23.0.1 or higher)

  • Dockerfile

  • Docker-compose

  • NodeJS v20

To run Docker without sudo, you can use the following command to add the current user to the docker group. Restart the terminal after running the command.

sudo usermod -aG docker ${USER}
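To confirm the group change took effect, you can try any Docker command from a new terminal session. This is just a quick sanity check; the hello-world image is an arbitrary choice:

```shell
# Should run without sudo once the group membership is active
docker run --rm hello-world
```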

What we are doing in this article:

The following is the list of things we will be doing in this article:

  1. We will dockerize a simple NodeJS application

  2. We will create a docker-compose file to run the application

  3. We will run the application using Docker swarm

1. Dockerize a NodeJS application

Let's start by creating a simple Node.js application. Open your terminal and run the following command to initialize an empty Node.js project:

npm init

Under the scripts section in the package.json file, add the following script to start the application:

"start": "node index.js"

Let's install ExpressJS and create an API

npm install express

Create a new file called index.js in the same directory and paste the following content into it:

const express = require('express');
const app = express();

// A single route that returns a greeting
app.get('/', function (req, res) {
    res.send('Hello world!');
});

// Listen for incoming requests on port 3000
app.listen(3000);

Run npm start to start the application. If you go to http://localhost:3000/ in your browser, you should see Hello world!. Let's Dockerize this Node.js application. Create a file called Dockerfile in the same directory and paste the following content into it:

FROM node:20
WORKDIR /app
COPY package.json .
ARG NODE_ENV
RUN if [ "$NODE_ENV" = "development" ]; \
then npm install; \
else npm install --only=production; \
fi
COPY . ./
EXPOSE 3000

Let’s go through what we are doing in the Dockerfile.

FROM node:20

We are using NodeJS 20 as our base image

WORKDIR /app

We are setting /app as the working directory inside the container

COPY package.json .

We are copying our package.json file into the container

ARG NODE_ENV
RUN if [ "$NODE_ENV" = "development" ]; \
then npm install; \
else npm install --only=production; \
fi

We are using a build argument called NODE_ENV to install only the production dependencies when building for production. Since ExpressJS is the only dependency this project has, it doesn't matter much here, but it is a good practice.

COPY . ./

We are copying the source code from the current directory into the container. The reason why we are copying package.json separately is to reduce the rebuild time of the container.

EXPOSE 3000

We need to expose port 3000 so that the server can listen to incoming requests.
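Before we bring in Compose, you can sanity-check the Dockerfile on its own by building and running the image directly. The nodeapp tag below is an arbitrary name chosen for this check:

```shell
# Build the image, passing NODE_ENV the same way Compose will later
docker build --build-arg NODE_ENV=production -t nodeapp .

# Run it, mapping port 3000; the start command comes from package.json
docker run --rm -p 3000:3000 nodeapp npm run start
```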

2. Creating a docker-compose.yml file

Let’s create a docker-compose file so that it is easier for us to run the application. Create a file called docker-compose.yml and add the following code to it:


version: "3.9"

services:
    nodeapp:
        build:
            context: .
            args:
                NODE_ENV: production
        ports:
            - "3000:3000"
        command: ["npm", "run", "start"]

Let's go through the code. We have a service called nodeapp. It will automatically pick up the Dockerfile from the current directory.

context: .
args:
    NODE_ENV: production

We mention the build context as the current directory and pass NODE_ENV as production. This is used by the Dockerfile as we mentioned above (in the ARG NODE_ENV step).

ports:
    - "3000:3000"

Then we map port 3000 in the container to port 3000 on the host machine. We also instruct Docker to run the command npm run start to start the application.

Before we build our image, it is good practice to reduce the number of files we add to it. Let's create a .dockerignore file. Through this file, we can tell Docker to ignore the files and folders that we don't want to add to the build. Paste the following into the file:


node_modules
Dockerfile
.dockerignore
.git
.env
.gitignore
docker-compose*

To test, you can run the command:


docker compose up --build

If everything goes well, the output should look something like below. If you are running this for the first time the output might be different.


$ docker compose up --build
[+] Building 2.0s (11/11) FINISHED
 => [internal] load build definition from Dockerfile
 => => transferring dockerfile: 247B
 => [internal] load .dockerignore
 => => transferring context: 2B
 => [internal] load metadata for docker.io/library/node:20
 => [auth] library/node:pull token for registry-1.docker.io
 => [1/5] FROM docker.io/library/node:20@sha256:9aa3de5470c99408fda002dc1f406e92a31daf0492eb33d857d8d9d252edcc52
 => [internal] load build context
 => => transferring context: 39.94kB
 => CACHED [2/5] WORKDIR /app
 => CACHED [3/5] COPY package.json .
 => CACHED [4/5] RUN if [ "production" = "development" ];         then npm install;         else npm install --only=production;         fi
 => [5/5] COPY . ./
 => exporting to image
 => => exporting layers
 => => writing image sha256:2f4889c63ee4816d1b17ef08ba085f75eab76c236d471394ce1ae3412214940e
 => => naming to docker.io/library/docker_swarm_example-nodeapp
[+] Running 1/1
 ⠿ Container docker_swarm_example-nodeapp-1  Recreated
Attaching to docker_swarm_example-nodeapp-1
docker_swarm_example-nodeapp-1  | 
docker_swarm_example-nodeapp-1  | > docker_swarm_example@1.0.0 start
docker_swarm_example-nodeapp-1  | > node index.js
docker_swarm_example-nodeapp-1  |

If you go to http://localhost:3000/ in your browser, the application should function in the same way. You have successfully Dockerized a Node.js application.
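With the container running, you can also verify the endpoint from a second terminal instead of the browser:

```shell
# Request the root route served by the container
curl http://localhost:3000/
```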

3. Integrating Docker Swarm

Since we have a working docker compose file, it is easy to add the configurations for Docker Swarm.

To test Docker Swarm we need to publish our Docker image to a registry like Docker Hub or Amazon ECR. Since we are testing in a local environment, there is an easier way: running a local registry using the registry:2 image. We can start the registry with the following command:

docker service create --name registry --publish published=5000,target=5000 registry:2

The registry will run on port 5000. Let's update our docker-compose file and add an image property. Immediately after the service name, add an image key with the value 127.0.0.1:5000/nodeapp as shown below:


nodeapp:
    image: 127.0.0.1:5000/nodeapp

Now in your terminal build your docker image using the following command:


docker compose build nodeapp

This will pick up the docker-compose.yml file, identify the nodeapp service, and build the image.

Then run the following command to publish your image:

docker compose push nodeapp

This will check the image section of the nodeapp service and will publish the image to the repository.
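To confirm the push worked, you can query the local registry directly. The registry:2 image exposes the standard Docker Registry HTTP API:

```shell
# List the repositories stored in the local registry
curl http://127.0.0.1:5000/v2/_catalog

# List the tags published for our image
curl http://127.0.0.1:5000/v2/nodeapp/tags/list
```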

Adding the deploy section:

We can add the configurations for Docker Swarm to our docker-compose.yml file under a section called deploy . Paste the following to the nodeapp service:


deploy:
    replicas: 1
    restart_policy:
        condition: any
    update_config:
        parallelism: 1
        delay: 15s

Let’s go through the configuration:

replicas: 1

We are telling Docker to create one instance of the nodeapp service. This is only the initial configuration. We can also add more replicas based on our needs.
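Once the stack is running (we deploy it as node_stack later in this article), the replica count can also be changed on the fly without editing the compose file. A sketch, assuming that stack and service name:

```shell
# Scale the service up to three container instances
docker service scale node_stack_nodeapp=3
```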

restart_policy:
    condition: any

We are telling Docker to restart the container whenever it stops, regardless of the reason.

update_config:
    parallelism: 1
    delay: 15s

Under the update configuration, we are instructing Docker on how many containers can be updated in parallel when we deploy a new version of our application, and how long to wait between updates. This ensures that we don't bring down the entire application whenever we push a new update.
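A rolling update is then just a matter of rebuilding, pushing, and redeploying. With the settings above, Swarm replaces containers one at a time (parallelism: 1), waiting 15 seconds (delay) between replacements. A sketch, assuming the stack is deployed as node_stack as we do later in this article:

```shell
# Rebuild the image with the latest source code
docker compose build nodeapp

# Push it to the local registry
docker compose push nodeapp

# Redeploying the stack triggers the rolling update
docker stack deploy -c docker-compose.yml node_stack
```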

The final docker-compose file will look like the following:


version: "3.9"
services:
    nodeapp:
        image: 127.0.0.1:5000/nodeapp
        deploy:
            replicas: 1
            restart_policy:
                condition: any
            update_config:
                parallelism: 1
                delay: 15s
        ports:
            - "3000:3000"
        build:
            context: .
            args:
                NODE_ENV: production
        command: ["npm", "run", "start"]

Before we test the above configuration there are a few key concepts in Docker Swarm that we need to understand.

Services: A service in Docker Swarm can run multiple containers that use the same image.

Stack: A stack consists of multiple Docker services.

Nodes : A node is an instance of the Docker engine participating in the swarm. You can run one or more nodes on a single physical computer or cloud server. In production, a Swarm will comprise multiple physical and cloud machines distributed across a network. There are 2 types of nodes: Manager nodes and Worker nodes. The manager node delegates tasks to worker nodes. A Swarm cluster can contain multiple manager nodes and multiple worker nodes. A manager node can do anything a worker node does. A single node can run multiple services.

Tasks: Tasks are sets of instructions that run on a manager or worker node. This includes creating or removing services. By default, services are randomly distributed between the nodes. However, we can set rules on which node a particular service should be deployed to.

Load Balancing: The swarm manager uses load balancing to expose the services outside the network. You can specify any unused port. Swarm mode also has an internal DNS component that automatically assigns each service in the swarm a DNS entry, which the services use to communicate with each other.

You can read more about these concepts in the documentation. To run the application using Docker Swarm, we need to enable swarm mode on our machine. Run the following command to get the IP address of the machine:

hostname -I | awk '{print $1}'
        

Run the following command to enable Docker Swarm. Replace <private-ip-address> with the IP address you received from the above command.

docker swarm init --advertise-addr <private-ip-address>

After you run the command it should output another command like the following:

docker swarm join --token SWMTKN-1-17pa6y2g3e128rza7am8j3iucmmkqj9pasdewwe66d9pv5ka8i-9cra1kdy24ptp4tatx7ftg3ww 10.10.11.153:2377
        

We can use this command to add a worker node to this swarm. To keep this tutorial as simple as possible, we will use a single node. Run the following command to deploy the stack:


docker stack deploy -c docker-compose.yml node_stack

We can use the docker stack command to deploy multiple services at once. We need to specify the compose file that Docker needs to deploy the stack. You will see the following output:


$ docker stack deploy -c docker-compose.yml node_stack
Ignoring unsupported options: build
Creating network node_stack_default
Creating service node_stack_nodeapp

        

This will create a default network and a service called node_stack_nodeapp. The service name is a combination of the stack name and the service name we specified in the docker-compose.yml file. To check whether the service is running, you can run the docker service ls command as shown below:


$ docker service ls
ID             NAME                 MODE         REPLICAS   IMAGE                           PORTS
nkb3izhmuz6k   node_stack_nodeapp   replicated   1/1        127.0.0.1:5000/nodeapp:latest

        

You will get details regarding the service like the number of replicas and the image that was used to deploy the service.

If you run the docker ps command, it will show the container that is part of this service.
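The docker service subcommands are also useful for inspecting the running stack; these are standard Swarm CLI commands applied to the service name from this tutorial:

```shell
# List the tasks (container instances) that make up the service
docker service ps node_stack_nodeapp

# Follow the service's logs, aggregated across all replicas
docker service logs -f node_stack_nodeapp
```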

You will notice that even though we are running our application in Swarm mode, we can't access it through http://localhost:3000. This is expected. We will need to integrate a proxy server or a load balancer to redirect incoming requests to our service. We will explore that in the next article.

You can run the following command to remove the stack:

docker stack rm node_stack
        

You can refer to the entire source code in this GitHub repo.

Dinesh Murali
Lead-Technology
