DockerLab

Fabrizio Waldner
May 18, 2019


An infrastructure of distributed systems where you can run your containerized services.
https://github.com/fabrizio2210/ansible-DockerLab

Schema of the infrastructure

Introduction

I am a Unix system administrator and, like many people in my field, I have a pet project at home that I treasure like a baby. This project is important to me because it aims to create a supporting infrastructure on which I can test and run other services and projects, so it lets me unleash my creativity.

In fact, I installed this configuration on my three Raspberry Pis, on which I run several websites, reverse proxies and monitoring tools.

It all started from these four points:
- I didn’t have a NAS or servers on which to virtualize VMs;
- I discovered Docker;
- I like to have a live environment at home, in a distributed-systems fashion;
- I’d like to have horizontal scaling.

The whole infrastructure is built up with an Ansible playbook: setDocker-on-Raspberry.yml
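
A hypothetical invocation could look like the sketch below; the inventory file name is just an assumption, the exact usage is documented in the project README:

# Run the playbook against an inventory listing the Raspberry Pis (sketch, not the documented command)
ansible-playbook -i hosts setDocker-on-Raspberry.yml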

If you want to get your hands dirty with this project or need more usage details, read the README of the GitHub project.

Goal

I would like to have an environment, like a black box with a virtual IP, where I can deploy Docker stacks with persistent data. It must be flexible with respect to the number of available nodes: it has to tolerate node failures and the addition of new nodes.

Used technologies

Besides curiosity, I designed this infrastructure to learn some new tools and paradigms. Indeed, with this project I learned:

  • GlusterFS: how to set it up and what the pros & cons of this type of distributed filesystem are;
  • how to share the persistent data: GlusterFS is not always the best solution;
  • how to design Docker stacks that can work in a dynamic environment;
  • how to set up and use Traefik: it lets me serve several websites on the same port in a dynamic way;
  • how to set up Keepalived to provide a virtual IP;
  • systemd and its unit dependencies: very powerful for handling the bootstrap of the components.

Why Docker?

Docker is very light compared to a hypervisor, and on a Raspberry Pi it is almost impossible to run a virtualized environment. Moreover, the Docker universe is full of ready-made images that you can use out of the box. Recently, ARM support has even been added to a lot of images.

Why Raspberry Pi?

The Raspberry Pi is by definition the Single Board Computer (SBC) for techie people. On its side, it has low power consumption and a low purchase cost. The playbook also works on other platforms (like x86_64): as proof of that, testing and development were done with Vagrant on my laptop.

DockerLab on real hardware

Principles

Resiliency

All systems are fallible, and the Raspberry Pi is more fallible than most: it is easy to end up with a corrupted SD card if your SBC runs day and night.
The cluster should be composed of at least 3 nodes, otherwise GlusterFS and Docker cannot reach a consensus quorum (both need a majority of the nodes to be up).
In this project it is easy to restore the number of available nodes: you just have to prepare a blank Raspberry Pi and run the playbook again.

Network resiliency

Docker Swarm already comes with a routing mesh (https://docs.docker.com/engine/swarm/ingress/). It lets you point at any node of the cluster and use it as an entrypoint: the request will be forwarded internally to the right container.
I used Keepalived to have a common virtual IP that always sits on one healthy node. Node health is checked by verifying the presence of the Docker daemon.
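
The real configuration is generated by the playbook; as a minimal sketch, assuming a pgrep-based check of the Docker daemon and placeholder values for the interface, router id and address, a Keepalived configuration along these lines would do the job:

# Minimal sketch of /etc/keepalived/keepalived.conf (all values are placeholders)
cat > /etc/keepalived/keepalived.conf <<'EOF'
# Consider the node healthy only if the Docker daemon is running
vrrp_script chk_docker {
  script "/usr/bin/pgrep dockerd"
  interval 5
  fall 2
}
vrrp_instance DockerLab {
  state BACKUP
  interface eth0
  virtual_router_id 51
  priority 100
  # The common virtual IP shared by the cluster
  virtual_ipaddress {
    192.168.1.250/24
  }
  track_script {
    chk_docker
  }
}
EOF
systemctl restart keepalived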

Storage resiliency

To have distributed storage, I chose GlusterFS on USB pen drives, but I found it very slow. It suits configuration files that don’t change frequently. However, when I placed the MySQL files for Zabbix on it, overall performance degraded because the Gluster processes did a lot of small writes: the CPUs of all the Raspberry Pis spent a lot of time waiting for I/O and the user experience of the websites became awful. In that case, the solution was to put the database files on an NFS share, giving up high availability.
Now I am exploring the use of an SSD disk instead of a USB pen drive; I will report my thoughts in a different article.
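
For reference, this is roughly what creating and mounting a 3-way replicated GlusterFS volume looks like; the volume name, peer names and brick paths below are assumptions (the playbook uses its own), and the bricks are assumed to live on the dedicated USB/SSD storage:

# Sketch of a replica-3 GlusterFS volume, run once from one node
gluster peer probe docker2
gluster peer probe docker3
gluster volume create cluster replica 3 \
  docker1:/data/brick1 docker2:/data/brick1 docker3:/data/brick1
gluster volume start cluster
# Each node then mounts the volume from its local server
mount -t glusterfs localhost:/cluster /mnt/cluster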

Scaling

This project was designed for the Raspberry Pi, so I kept in mind that the resources of a single board are limited: you cannot scale vertically. You can only scale horizontally, so if you want more RAM or CPU you have to add another node. Please note that the memory that can be allocated to one process is still limited by the maximum RAM of a single node (1 GB in the case of the Raspberry Pi 3B).
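
Under the hood, adding a node boils down to joining the Swarm (the playbook automates this); a rough sketch of the manual equivalent, with a placeholder token and manager address:

# On an existing manager: print the join command for a new worker
docker swarm join-token worker
# On the new Raspberry Pi: paste the printed command, e.g.
docker swarm join --token <token> 192.168.1.10:2377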

Problems and caveats

Dependencies

The services in the Docker containers cannot start if there is no storage. This translates to: I have to start GlusterFS before Docker, and the latter must not start if the former fails. This ordering is possible with systemd unit files; the right use of After and Requires made it work.
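
The project ships its own unit files; just to illustrate the idea, assuming the shared filesystem were described by a mnt-cluster.mount unit, a drop-in like this would enforce the ordering:

# Sketch of a systemd drop-in that makes Docker wait for the cluster mount
mkdir -p /etc/systemd/system/docker.service.d
cat > /etc/systemd/system/docker.service.d/wait-for-cluster.conf <<'EOF'
[Unit]
# Start Docker only after the shared filesystem is mounted,
# and do not start it at all if the mount fails.
After=mnt-cluster.mount
Requires=mnt-cluster.mount
EOF
systemctl daemon-reload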
Another problem I encountered was the concurrent start of the whole cluster; let me explain. GlusterFS works on a client/server paradigm: the client mounts the filesystem from the server, while the servers keep themselves in sync. In my case the client is on the same node as the server and connects to localhost. You might wrongly assume that the client always finds the server available; instead, the server is not immediately ready, and this shows up when the entire cluster starts at once (typically after a blackout). To account for this server delay I had to handle the mount with a script (mount-mnt_cluster.sh).
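
The idea behind mount-mnt_cluster.sh is simply to retry until the local GlusterFS server answers; a minimal sketch (paths, volume name and timings are assumptions):

#!/bin/sh
# Retry the GlusterFS mount until the local server is ready (sketch)
for i in $(seq 1 30); do
  mountpoint -q /mnt/cluster && exit 0                          # already mounted
  mount -t glusterfs localhost:/cluster /mnt/cluster && exit 0  # try to mount
  sleep 5                                                       # server not ready yet, wait and retry
done
echo "giving up on mounting /mnt/cluster" >&2
exit 1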

Performances

The Raspberry Pi was born as a development board, so it is very versatile and can be used for a wide variety of projects. The SBC allowed me to test DockerLab on bare metal, but I didn’t expect great performance. So I think it is useless to talk about performance in absolute terms; it is better to talk about which implementation choices can increase performance.
From this perspective, I am replacing the SD card with an SSD disk. I will report the change in performance in a different article.

Operations

Monitoring

Monitoring is very important to understand what is happening. To achieve this I installed Zabbix, split into 3 containers: database, server and web server. I took the stack directly from its official repository https://github.com/zabbix/zabbix-docker.

Portainer

At the end of the playbook, Portainer (portainer.io) is installed. It is a web dashboard that lets you manage the Docker Swarm cluster in an easy way. I find it very useful and almost complete.

Traefik

Traefik (traefik.io) is a reverse proxy for containers. It allows you to run several websites on the same HTTP port (80 or 443). It supports the ACME protocol, so the effort needed to expose a port with a Let’s Encrypt certificate is very low.

Traefik was born for containers: it can detect dynamically, at runtime, the containers that need to expose a port. You only need to attach a label to the service you are deploying:

services:
  wordpress:
    image: wordpress:latest
    deploy:
      labels:
        traefik.port: 80
        traefik.frontend.rule: 'Host:mywp.mydomain.net'

Hands on

The following procedure gives you an example of deploying WordPress in a virtual Vagrant environment. The MySQL data and the web root will be shared across the entire cluster on the /mnt/cluster filesystem, so in case of a node failure the stack will automatically be restarted on another node and will find the same data.

git clone --recursive https://github.com/fabrizio2210/ansible-DockerLab.git
cd ansible-DockerLab
./test.sh

Wait a few minutes until the virtual machines are up. Enter one of them and execute the following commands.

vagrant ssh docker1
sudo -i
mkdir /mnt/cluster/html
chown www-data:www-data /mnt/cluster/html
mkdir /mnt/cluster/db

This is the stack to deploy:

version: '3.3'

services:
  db:
    image: mysql:5.7
    volumes:
      - db_data:/var/lib/mysql
    restart: always
    environment:
      MYSQL_ROOT_PASSWORD: somewordpress
      MYSQL_DATABASE: wordpress
      MYSQL_USER: wordpress
      MYSQL_PASSWORD: wordpress

  wordpress:
    depends_on:
      - db
    image: wordpress:latest
    ports:
      - "8000:80"
    volumes:
      - wp_content:/var/www/html/wp-content
    restart: always
    environment:
      WORDPRESS_DB_HOST: db:3306
      WORDPRESS_DB_USER: wordpress
      WORDPRESS_DB_PASSWORD: wordpress
      WORDPRESS_DB_NAME: wordpress

volumes:
  db_data:
    driver: 'local'
    driver_opts:
      type: 'none'
      o: 'bind'
      device: "/mnt/cluster/db"
  wp_content:
    driver: 'local'
    driver_opts:
      type: 'none'
      o: 'bind'
      device: "/mnt/cluster/html"

Now open Portainer (http://192.168.121.253:9000/) in a browser. The IP is defined inside the file vagrant-groups.list and depends on your Vagrant configuration; it should be consistent with the virtual subnet.
Register a user.

Log out and log in again; this way you can define the local endpoint.

Click on Local, then on Stacks, then Add stack.

Paste the stack, give it a name and click on Deploy the stack.

Finally, you can connect to http://192.168.121.253:8000/, where WordPress has been installed.
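
If you prefer the command line to Portainer, the same stack can be deployed directly from a node (the stack file name below is just an assumption):

# Equivalent deployment from the CLI, assuming the YAML above was saved as wordpress.yml
docker stack deploy -c wordpress.yml wordpress
docker stack services wordpress   # check that both services are running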

Failure and restore

You can try restoring a node.
Once your virtual environment is set up, you can issue the command to destroy one node:

vagrant destroy
docker4: Are you sure you want to destroy the 'docker4' VM? [y/N] Y
docker3: Are you sure you want to destroy the 'docker3' VM? [y/N] n
docker2: Are you sure you want to destroy the 'docker2' VM? [y/N] n
docker1: Are you sure you want to destroy the 'docker1' VM? [y/N] n

Answer “yes” only for the first machine: now your cluster has one node less.
To restore it, you simply have to run the playbook again:

./test.sh
PLAY RECAP ********************************************************
docker1: ok=56 changed=0 unreachable=0 failed=0
docker2: ok=56 changed=0 unreachable=0 failed=0
docker3: ok=67 changed=3 unreachable=0 failed=0
docker4: ok=62 changed=37 unreachable=0 failed=0
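
After the playbook finishes, you can verify that the rebuilt node rejoined the Swarm (assuming docker1 is a Swarm manager):

# Check the cluster membership from a manager node
vagrant ssh docker1 -c 'sudo docker node ls'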

How to set up Vagrant

To set up Vagrant on libvirt/KVM/QEMU, I followed these two guides: https://linuxsimba.com/vagrant-libvirt-install https://docs.cumulusnetworks.com/display/VX/Vagrant+and+Libvirt+with+KVM+or+QEMU


Written by Fabrizio Waldner

Site Reliability Engineer at Google. This is my personal blog, thoughts and opinions belong solely to me and not to my employer.