Through many years spent developing applications with different technologies and for different clients, we have tried a number of development methodologies and techniques in search of an optimal development process. But first, we had to define what optimal means for us.
Some would argue that the optimal process is the one that produces the required output (working functionality) the fastest. While that might be the case for small, simple projects, we find that on a large enterprise scale, the fastest output does not usually produce an optimal result.
Fast output requires cutting corners, duplicating code (instead of using an abstraction layer), and not thinking about future requirements and possible issues. As a consequence, such code usually does not comply with best practices and coding standards.
In our experience, all those things later lead to many hours spent on refactoring and fixing problems. On the other hand, enforcing good practices and code reviews from the start leads to good, error-free code, fewer problems, easier maintenance, and faster development in the long term.
In the following, we will describe the methodologies, tools, and setup we use today in a typical project.
Continuous integration is the practice of merging, building, deploying, and testing the code continuously, often multiple times a day. It requires investing some time in the initial setup, but once you get going, it moves very smoothly.
Continuous integration requires having a revision control system for source code, automated builds, testing, and deployment.
There are many articles discussing pros and cons of continuous integration so we will not elaborate on that topic here, instead, we will show how we apply it to our projects.
Below is a list of applications, technologies and services which help us maintain a successful DevOps story. Details about usage and why we chose some of them are described later.
In short, developers work on feature branches, which are reviewed by team leaders and other developers and then merged into the sprint branch. After each commit, unit and integration tests are executed on Jenkins. After the merge, the application is built and deployed to the test instance, and code analysis is done using SonarQube; once the front-end Selenium tests pass, the application is deployed to the staging instance.
Let’s review the process in detail.
Git Branching Strategy
We are using Git as a version control system, and we use the following branching strategy:
- master branch – a branch containing the code ready to be delivered after each sprint
- sprint branches – branches for each sprint
- feature branches – branches for every story being implemented inside a sprint
When we start a new sprint, we create a sprint branch, and for each story, we create a separate branch. If, for example, there is a common service being implemented for multiple stories, we create a separate branch for it in order to minimize development dependencies between stories; developers using it can merge that branch into their story branches even before it gets reviewed and merged into the sprint branch.
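A minimal sketch of this branching flow, using plain git commands in a throwaway repository (branch and story names like `sprint-12` and `feature/story-101` are illustrative, not from our actual projects):

```shell
# Set up a disposable repository so the example is self-contained.
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email "dev@example.com"
git config user.name "Dev"
git commit -q --allow-empty -m "initial commit"
git branch -M master

# Start a new sprint from master.
git checkout -q -b sprint-12 master

# A common service used by multiple stories gets its own branch...
git checkout -q -b feature/common-service sprint-12
git commit -q --allow-empty -m "implement common service"

# ...so a story branch can merge it in even before it is reviewed.
git checkout -q -b feature/story-101 sprint-12
git merge -q --no-edit feature/common-service
git commit -q --allow-empty -m "implement story 101"

# After the code review passes, the story is merged into the sprint branch
# (in practice this happens through a GitLab merge request, not locally).
git checkout -q sprint-12
git merge -q --no-edit feature/story-101

# Show the resulting history.
git log --oneline
```

At the end of the sprint, the sprint branch itself would be merged into master the same way.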
This process basically requires creating a lot of small branches, and in the beginning, there was some resistance, because developers were not used to working that way. However, after a month or two, everyone could see the benefits and started thinking in advance about what the dependencies are, what to implement first and how to avoid potential problems.
Code Quality Assurance
In order to produce good, well structured, maintainable, error-free code, we have enforced three main principles:
- Code Review
- Unit and Integration Testing
- Static Code Analysis
Before implementing the current process, we used to do code reviews occasionally – once a sprint, once a month, or whenever time allowed – and we weren’t using any specific tools, only reviewing the latest commits. As it turned out, as the codebase grew larger, it became impossible to track, review, and correct the mistakes or the things that weren’t implemented correctly.
We started to explore various popular tools, looking for one that suits our needs best. We didn’t like tools that require a code review for every pushed commit, e.g. Gerrit, since, in our opinion, that generates too much overhead. We wanted to be able to review the code for each feature once it is completed, and we found that GitLab and its merge requests were exactly what we needed.
The process is pretty simple: you create protected branches (e.g. the sprint, master, and delivery branches), and developers create and push their own feature branches. Once they finish working on a feature, they issue a merge request from the GitLab UI. Before merging, team leaders or other developers review the code in GitLab, using its user-friendly code comparison and the ability to comment on sections of code. The developer who made the change gets notified about comments and can reply or update the code if necessary.
In the end, every piece of code gets reviewed before a merge, and it turned out that many potential problems were caught during this process, many bad practices were eradicated, and the quality of code we were producing improved drastically.
Of course, the code review process was the second major barrier. In the beginning, we got a lot of pushback from the team as we tried to improve code quality, apply design patterns where necessary, and obey coding standards. Soon we found that, as time went by, the code started to improve, and developers realized that it’s better to write good code once than to get it back for refactoring multiple times. In the end, negative reviews decreased significantly, and code quality kept getting better. We had achieved exactly what we wanted – producing good, maintainable, clean, testable code – and it has become second nature to our developers!
Of course, we could not rely on code review alone to assure code quality. People make mistakes and fail to spot some issues, so we got a tool to help us – SonarQube, which is described later.
Unit and Integration Testing
Although they can seem tedious at the time, we consider unit and integration tests one of the major tools for assuring code correctness. Initially, it seems silly to test simple methods, but the benefits become visible later, when a change is made to some part of the code. Developers usually don’t see all the side effects a change produces; in most instances, the problem is caught straight away when either a developer or Jenkins executes the tests.
Unit tests provide a great way of testing edge cases, and part of the code review process is to assure that edge cases are covered in tests. We tend to have at least 70% of code covered in unit tests.
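A coverage target like this can also be enforced by the build itself. For a Maven project, a sketch of such a check using the JaCoCo plugin might look as follows (the plugin version is illustrative; the 70% threshold matches the target above):

```xml
<!-- pom.xml: fail the build when line coverage drops below 70% -->
<plugin>
  <groupId>org.jacoco</groupId>
  <artifactId>jacoco-maven-plugin</artifactId>
  <version>0.8.11</version>
  <executions>
    <execution>
      <goals>
        <!-- instrument the JVM that runs the unit tests -->
        <goal>prepare-agent</goal>
      </goals>
    </execution>
    <execution>
      <id>coverage-check</id>
      <goals>
        <goal>check</goal>
      </goals>
      <configuration>
        <rules>
          <rule>
            <element>BUNDLE</element>
            <limits>
              <limit>
                <counter>LINE</counter>
                <value>COVEREDRATIO</value>
                <minimum>0.70</minimum>
              </limit>
            </limits>
          </rule>
        </rules>
      </configuration>
    </execution>
  </executions>
</plugin>
```

With this in place, `mvn verify` fails on any machine – a developer’s or Jenkins – whenever coverage falls below the threshold, rather than relying on reviewers to notice.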
For integration tests, the situation is a bit different. They are usually very time-consuming for development staff, so we enforce them only for testing complex business logic. They require preparing test data, which is usually done with the help of our quality assurance staff.
Static Code Analysis
As previously mentioned, aside from code review, assuring the code quality requires static code analysis.
We do this with the help of SonarQube. It allows us to set up code style standards, detect possible problems, and enforce test coverage.
You can specify quality gates for many aspects – for example, the minimum percentage of lines covered by tests, or how many critical or blocker issues are allowed.
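The quality gates themselves are configured in the SonarQube UI, but each analyzed project typically carries a small properties file that tells the scanner what to analyze and where to find coverage data. A sketch (project key, paths, and exclusions are illustrative):

```properties
# sonar-project.properties – analysis settings for one project
sonar.projectKey=my-company:backend-api
sonar.projectName=Backend API
sonar.sources=src/main/java
sonar.tests=src/test/java

# Feed JaCoCo results to SonarQube so the coverage condition
# of the quality gate has data to evaluate.
sonar.coverage.jacoco.xmlReportPaths=target/site/jacoco/jacoco.xml

# Keep generated code out of the analysis.
sonar.exclusions=**/generated/**
```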
All issues are generated and assigned automatically to the person who caused them. They are classified as Blocker, Critical, Major, Minor, and Info, based on the severity of the problem. To detect issues, SonarQube uses rules. There are predefined rules for many languages, but you can import or create your own, turn off existing ones, or change their priority to suit your preferences.
Examples of rules used to create issues are:
- Duplicate blocks of code
- Potential SQL injection
- Catching an exception without logging or throwing it
- Classes that are too complex
- Unused methods or variables
SonarQube also provides a project dashboard, giving an overview of the project: the number of issues, test coverage, and more.
Build Tools
One of the crucial things when setting up build tools is to define environments and a separate build task for each environment, so that the build process produces a different application setup for every environment.
The most common differences between environments are:
- Configuration setup (e.g. database connection properties, host URLs etc.)
- Development and test builds with verbose logging turned on
Following this approach, we are able to run the application on the developer machine as well as deploy it to test and production servers at any time.
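With Maven, for example, such per-environment builds can be sketched with build profiles (the profile ids and property names here are hypothetical; real projects would combine this with resource filtering to substitute the values into configuration files):

```xml
<!-- pom.xml: one profile per environment; activate with e.g. `mvn package -Ptest` -->
<profiles>
  <profile>
    <id>dev</id>
    <activation>
      <activeByDefault>true</activeByDefault>
    </activation>
    <properties>
      <env.name>dev</env.name>
      <!-- verbose logging for development builds -->
      <log.level>DEBUG</log.level>
    </properties>
  </profile>
  <profile>
    <id>test</id>
    <properties>
      <env.name>test</env.name>
      <log.level>DEBUG</log.level>
    </properties>
  </profile>
  <profile>
    <id>prod</id>
    <properties>
      <env.name>prod</env.name>
      <log.level>WARN</log.level>
    </properties>
  </profile>
</profiles>
```

Environment-specific values such as database connection properties and host URLs would live in per-profile property files selected the same way.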
Putting it all together
Naturally, it would be impossible to maintain this process without automation. We set up the entire process using Jenkins as the build environment, and we run everything in Docker containers so that we can easily migrate or scale any part of the system. In the following example, we used two Docker hosts.
The first one was running the main Jenkins instance, and the second one had two Jenkins slaves: one for running Maven builds and another for Node projects.
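A minimal sketch of that layout with Docker Compose (image tags and agent wiring are illustrative, and a real setup needs the agent secrets generated in the Jenkins UI):

```yaml
# docker-compose.yml – one Jenkins master and two build agents (sketch)
services:
  jenkins-master:
    image: jenkins/jenkins:lts
    ports:
      - "8080:8080"    # web UI
      - "50000:50000"  # inbound agent connections
    volumes:
      - jenkins_home:/var/jenkins_home

  agent-maven:
    image: jenkins/inbound-agent:latest   # runs the Maven builds
    environment:
      - JENKINS_URL=http://jenkins-master:8080
      - JENKINS_AGENT_NAME=maven
      - JENKINS_SECRET=<agent secret from the Jenkins UI>

  agent-node:
    image: jenkins/inbound-agent:latest   # runs the Node builds
    environment:
      - JENKINS_URL=http://jenkins-master:8080
      - JENKINS_AGENT_NAME=node
      - JENKINS_SECRET=<agent secret from the Jenkins UI>

volumes:
  jenkins_home:
```

In our setup the two agents lived on a second Docker host rather than alongside the master; the wiring is the same, only the host differs.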
We set up the main Jenkins instance to get notified by Git hooks whenever anything is pushed to the GitLab repository. The hooks trigger a build job on Jenkins, which compiles, builds, and runs unit tests for the particular project. For feature branches, any problem detected in the process is reported on the problematic commit in GitLab. Additionally, a SonarQube analysis of the pushed code is executed.
For certain permanent branches, like master and sprint branches, Jenkins creates deployable Docker images, which are started on one of the Docker hosts for additional automated tests (e.g. Selenium, Angular end-to-end) and manual quality assurance testing.
For backend applications with an exposed API, we added build tasks that generate a Swagger definition file based on the exposed REST endpoints. After each successful merge into the current sprint branch, the definition is deployed to our Swagger instance, so we always have up-to-date API documentation.
Following the process described above, we have significantly improved the quality of the code coming out of every sprint and reduced the number of issues and the occasions where we need to refactor in later stages of product development. Additionally, we can be sure at every point that the code is covered by unit tests and that all tests pass.
Furthermore, we always have a version of the application ready to be delivered and deployed to production. It has already been built and tested in our environment, so no unexpected hiccups can occur.