CI/CD with Jenkins on Dockerlab

Have you ever wanted to build your own CI/CD pipeline at home?

Fabrizio Waldner
8 min read · Aug 29, 2020

While developing the web-infrared project (https://github.com/fabrizio2210/web-infrared), I decided to create a reliable environment to develop in. By reliable I mean that I can write new code, even months apart, without breaking the old features.

web-infrared is basically a web service that exposes an API and a web page to control my television; you can find more information in this article.

Cloud solutions like Travis CI are too limited when you develop for the ARM architecture or want to test your code on several containers concurrently (especially on the free tier). I like Raspberry Pis, but their architecture (i.e. armv7) is not commonly available in the cloud. Even developing on my laptop (x86_64) can be a problem without cross-compilation tools. Installing Jenkins directly on a Raspberry Pi, on the other hand, opens up a lot of scenarios.

Development cycle

Before starting to create the CI/CD pipeline, I had to figure out how the service is structured.

The web remote control is basically composed of two parts:

  • Platform with middleware: LIRC subsystem, Nginx, Uwsgi…
  • Python scripts: my source code on top of the platform

The platform can be viewed as a package wrapper and the source code as the package content. This subdivision is important because I realized I can develop the content without the wrapper. Obviously, at deploy time, both components are equally important.

Developing

To develop just the content, I can exploit the fact that Python is platform independent and that Flask can run standalone without a web server. Moreover, I rely on some assumptions: middleware like Uwsgi and the Nginx web server do not interfere with development. These assumptions are quite strong, but they are acceptable in my case; it depends on where you want to put the division between content and container. This concept is illustrated in the next two figures, where the boundary between content and container differs.

First scenario: the current development cycle
Second scenario: the middleware included in the development cycle

As highlighted in the second figure, if I used a special feature of Uwsgi, like shared memory, I would have to move Uwsgi from the container side to the developed content.

In my opinion, developing means exploring. Before starting to write code, I explored how the subsystem should be configured and then decided to freeze that configuration. This lets me focus on developing the code, but strictly speaking it is not correct: the subsystem configuration should have its own development cycle too.

On the practical side, I use a Vagrant machine on my laptop in order to have a separate and responsive environment. The virtual machine (VM) is set up by the playbook ansible/lib/setApp.yml, which basically installs the Python environment. Then the Flask debug server can be started.

With this wrapped in a shell script, a developer can simply run:

user@host:~/web-infrared$ vagrant-tools/dev-web.sh

The developer can point a web browser at the web service exposed by the VM (http://192.168.121.161:5000/ in the example below) to get direct feedback on what they are doing.

+ vagrant ssh -c 'cd /opt/web-infrared/ ; source venv/bin/activate; python app.py'
* Serving Flask app "app" (lazy loading)
* Environment: develop
WARNING: This is a development server. Do not use it in a production deployment.
Use a production WSGI server instead.
* Debug mode: on
* Running on http://192.168.121.161:5000/ (Press CTRL+C to quit)
* Restarting with stat
* Debugger is active!
* Debugger PIN: 187-348-627
* Detected change in '/opt/web-infrared/app.py', reloading
* Restarting with stat
* Debugger is active!
* Debugger PIN: 187-348-627
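
For reference, the development server above is what Flask starts when app.py ends with an entry point like the following. This is a minimal sketch under my assumptions about the file layout, not the actual repository code:

# app.py -- hypothetical minimal entry point for the development server
from flask import Flask

app = Flask(__name__)

@app.route('/')
def root():
    # placeholder for the real remote-control pages and API
    return 'web-infrared'

if __name__ == '__main__':
    # Flask's built-in server; bind to all interfaces so the host can reach the VM
    app.run(host='0.0.0.0', port=5000, debug=True)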

This Python environment in the VM is used only for development. During the testing phase, as explained later, the environment is created by another script and packaged inside a DEB package.

To test the code in the development environment, it is possible to execute this script:

user@host:~/web-infrared$ vagrant-tools/test-web.sh

It executes the unit tests against the code (a deeper explanation follows in the testing section).

This script could even be put in a git hook, since it should be executed before pushing the code; this avoids wasting time waiting for the whole CI/CD pipeline to run. A sketch of such a hook follows.
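
Assuming the script below is saved as .git/hooks/pre-push and made executable, the hook could be as simple as:

#!/usr/bin/env python3
# .git/hooks/pre-push -- hypothetical hook running the unit tests before every push
import subprocess
import sys

# reuse the same script a developer runs by hand; a non-zero exit aborts the push
sys.exit(subprocess.run(['vagrant-tools/test-web.sh']).returncode)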

When the developer is satisfied with the visual feedback from the VM and the new code does not break the unit tests, they can push the commits, triggering the CI/CD pipeline.

The new code is checked immediately against the Vagrant environment

CI/CD pipeline

The CI/CD pipeline is composed mainly of two parts: testing and deployment. Before them, actually, there is the building stage: the creation of the artifact.

The two components are tested and deployed

The whole pipeline is implemented with Jenkins; you can find all the operations in the Jenkinsfile. Jenkins runs inside a container on my Dockerlab infrastructure.

Screenshot taken from my Jenkins

Building

BuildDocker: this stage creates the images used to build and test the code in the next stages. I use two different Docker images: the first one is the controller, while the second one is an almost vanilla image acting as the target. Using Docker containers has some advantages: they are very light, so they can run on the Raspberry Pi, and they provide an isolated environment, so executing the Ansible playbook is not a problem.

BuildVenv: initially this stage did not exist, because the Python code does not need to be compiled. But the virtualenv does: the Uwsgi libraries are compiled at installation time. There is a little optimization here: the creation of the virtualenv is triggered only when the requirements.txt file changes; otherwise the cached binaries are used (collectVenv). Uwsgi is built on a Raspberry Pi Zero, which has the armv6l architecture: a binary compiled on a Raspberry Pi 3 does not work on a Raspberry Pi Zero, but the other way around works. A Python sketch of the caching idea follows.
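
The trigger lives in the Jenkinsfile, but the idea can be sketched in Python; the stamp-file mechanism here is my illustration, not the pipeline's actual code:

# rebuild the virtualenv only when requirements.txt changed since the last build
import hashlib
import pathlib
import subprocess

stamp = pathlib.Path('venv/.requirements.sha256')
current = hashlib.sha256(pathlib.Path('requirements.txt').read_bytes()).hexdigest()

if not stamp.exists() or stamp.read_text() != current:
    # requirements changed: recreate the virtualenv and compile the dependencies (e.g. Uwsgi)
    subprocess.run(['python3', '-m', 'venv', 'venv'], check=True)
    subprocess.run(['venv/bin/pip', 'install', '-r', 'requirements.txt'], check=True)
    stamp.write_text(current)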

BuildDEB: in order to reduce compilation time and have a consistent deploy, I preferred to create a DEB package that includes the virtualenv and the source code. This stage, too, is not always triggered (e.g. when only the Ansible part changed), and the DEB package is cached (collectDEB).

Testing

The testing stage should run in an environment similar to the final target. Unfortunately, I don't have two Raspberry Pi Zeros, so I approximated the target environment with a Docker container on the Raspberry Pi 3 where Jenkins is running.

This approach, although light, has some disadvantages, especially for the middleware subsystem. It is not possible to test the LED emission, or even to have a working LIRC configuration. This creates a misalignment: for example, calling the irsend executable in the Docker container fails, while on the final target it works (I hope).

In any case, I have to trust that the subsystem works, and I can test only the code part (TestCode) and the Ansible playbook syntax (TestAnsible).

In my case, testing the code means unit tests. Inside the TestAPI class I put the assertions as methods. One method, for example, retrieves the homepage and verifies that it returns 200 OK.

import unittest
import app  # the Flask application module under test

class TestAPI(unittest.TestCase):
    def setUp(self):
        # use Flask's built-in test client instead of a real HTTP server
        app.app.testing = True
        self.app = app.app.test_client()

    def test_root(self):
        rv = self.app.get('/')
        self.assertEqual(rv.status, '200 OK')
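
The suite can then be run with the standard unittest runner, for example python -m unittest discover (the exact invocation depends on the repository layout); presumably this is what vagrant-tools/test-web.sh wraps.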

The configuration playbook, on the other hand, is checked by executing it against an empty container and then running the TestINFRA test class. A successful run of the playbook is a good start, but it is not enough: if one step in the playbook installs a package and another removes it, the play is OK, yet it may still be a mistake. The TestINFRA class is therefore needed to verify the state of the system at the end of the playbook. To be thorough, idempotency should also be tested, by executing the playbook twice and verifying that the second run reports zero changes. A sketch of such state checks follows.
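
As an illustration, a TestINFRA check could look like the following sketch; the config path and service name are my assumptions, the real checks live in the repository:

# hypothetical TestINFRA-style checks on the state the playbook leaves behind
import os
import subprocess
import unittest

class TestINFRA(unittest.TestCase):
    def test_nginx_site_installed(self):
        # the playbook is expected to put the site configuration in place
        self.assertTrue(os.path.exists('/etc/nginx/sites-enabled/web-infrared'))

    def test_uwsgi_service_active(self):
        # systemd should report the middleware service as running
        rc = subprocess.run(['systemctl', 'is-active', '--quiet', 'uwsgi']).returncode
        self.assertEqual(rc, 0)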

The Ansible playbook execution is the longest part of the test chain (I'll discuss improvements later).

Deploying

The DeployConfiguration stage is similar to the TestAnsible stage but with a different goal: it runs against the production environment (i.e. the Raspberry Pi Zero) in order to deploy the release. The Ansible playbook, in addition to applying the configuration, installs the DEB package and restarts the service if necessary.

Improvements to do

Include the configuration part in a Docker image

From my Jenkins pipeline I can see that the longest part is the Ansible one. At the moment the Ansible part includes the system configuration and the installation of the DEB package. By introducing a Docker container, the system configuration (Nginx, Uwsgi…) can be viewed as a package, similar to the DEB, that can be cached and installed on a Docker host. Obviously and unluckily, the part related to the Raspberry Pi wiring (i.e. the dtoverlays, explained better in the other article) cannot be brought inside the container. The most effective porting is to set up the dtoverlays externally (touching only /boot/config.txt) and mount the /dev/lirc1 special device inside the container, as sketched below.
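
For example, with the Docker SDK for Python, passing the device through could look like this sketch (the image name is an assumption; /dev/lirc1 is the device set up by the dtoverlay):

# hypothetical sketch: run the service container with the IR device passed through
import docker

client = docker.from_env()
client.containers.run(
    'web-infrared:latest',                # assumed image name
    devices=['/dev/lirc1:/dev/lirc1'],    # host device mapped into the container
    detach=True,
)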

Using a Docker image should improve the speed, because it is heavily cached, a layer is rebuilt only when necessary, and the same image can be used for both testing and deploying.

Making better use of the Docker Swarm

At the moment both containers (controller and target) run on the same host, because that is the simplest configuration in the Jenkins Docker plugin. I should find a way to distribute the containers across the pool, making better use of all the nodes in the Docker Swarm.

Dividing the static part from the dynamic one

At the moment all the pages are served by Flask. Some content is not dynamic, for example the CSS or the images. These resources should be served by Nginx, because it is more efficient. This change complicates development, because Nginx is currently not installed in the Vagrant VM, where Flask runs standalone.

Conclusion

I am satisfied with how I have set up the project. It took a lot of effort to achieve this development configuration, but I learned a lot. The work is not finished yet: some improvements can still be made, and if I change the infrastructure (e.g. dividing the static parts from the dynamic ones), further adaptations will be needed.

In this project I have seen that, to have a comfortable and reliable development environment, a lot of energy must be spent on the development platform and not just on the code.


Fabrizio Waldner

Site Reliability Engineer at Google. This is my personal blog, thoughts and opinions belong solely to me and not to my employer.