Categories
Golang

Running Sonar from Bitbucket pipelines on your Go apps

In the past when I was a Java developer we would run Sonar on our projects for static analysis. I have always liked the dashboard view it provides and the way it can find all sorts of problems in a code base that are often overlooked. When I learned that Sonar supported Go I knew that I would eventually integrate it into our environment. Since I had already built out our continuous integration pipeline in Bitbucket, I figured it would be easy to integrate Sonar into our builds. Little did I know that there wasn’t much documentation out there on the internet showing how to do so.

I knew I needed to run the sonar-scanner-cli against my project, but the only example I could find was this helpful blog post here. I started out with that approach but had an immediate problem. Before I integrated my Sonar step in my pipeline, my builds took about 4 minutes from the time a feature was merged into master until the software was running on Kubernetes in the cloud. After I followed that blog post my builds were taking an extra 3.5 minutes. Given that you are billed by build minutes, and given the amount of code we ship, this was going to be unacceptable for us: we would chew through too much build time.

Setting up your pipeline

Before we go any further though, let's set up a build pipeline for Go. My post assumes that you are using Go modules, as that makes the pipeline much simpler. My initial pipelines were built before I modularized my code and required more setup. Given that modules are the default in Go 1.14, I am not going to document the old GOPATH approach.

Start at the top of your pipeline file and define it like so:

image: golang:1.14.1

definitions:
  caches:
    go: $GOPATH/pkg
    sonar-cache: .sonar

We will use the golang Docker image as the base image for our pipeline. We then define caches for our Go module dependencies and our .sonar directory. This speeds up subsequent builds, and as mentioned above, in Bitbucket Pipelines time is money.

Build Steps

Once we have that defined, we can declare our build steps. We will look at the two main steps for our objective here:

  steps:
    - step: &build
        name: Build the app
        caches:
          - go
        script:
          - git config --global url."git@bitbucket.org:".insteadOf "https://bitbucket.org/"
          - go build
          - go test -cover --coverprofile=coverage.out ./...
          - go vet ./...
        artifacts:
          - coverage.out
    - step: &sonar
        name: Sonar code analysis
        caches:
          - sonar-cache
        image: sonarsource/sonar-scanner-cli:4.3
        script:
          - export SONAR_LOGIN=$SONAR_API_TOKEN
          - export SONAR_PROJECT_BASE_DIR=.
          - /opt/sonar-scanner/bin/sonar-scanner -Dsonar.login=$SONAR_API_TOKEN

The first thing we do is a git configuration. This is because if you use libraries in a private Bitbucket repository, go get will by default fail to authenticate. This tells git that any requests to Bitbucket should go over SSH instead of HTTPS. You will then need to have SSH keys configured on your pipeline.
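
To make this concrete, here is a minimal sketch of a go.mod that pulls in a private Bitbucket library; the module paths and version are hypothetical, not from my actual project. Without the URL rewrite above, go get tries to fetch the private module over HTTPS and fails to authenticate; with the rewrite, the underlying git fetch goes over SSH using the pipeline's keys. Depending on your proxy settings you may also need to mark such modules as private (for example via the GOPRIVATE environment variable), which I won't cover here.

module bitbucket.org/yourteam/myservice

go 1.14

require (
	// Hypothetical private library in a private Bitbucket repository.
	// By default go get resolves it over HTTPS; the insteadOf rewrite
	// above makes the underlying git fetch use SSH instead.
	bitbucket.org/yourteam/yourlib v1.2.0
)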

The next thing we do is build our code and run the unit tests. We use the built-in Go code coverage and write the coverage to a file. We also run go vet, which is great for finding issues in your code. At the end of that build step it saves the coverage file for use in later steps and also saves the Go cache, so that the dependencies downloaded for your build don't need to be downloaded again on each run.
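
As a quick illustration of what that coverage file captures, here is a hypothetical package and test of the kind the build step above would pick up; the names are made up for this example. Running go test -cover --coverprofile=coverage.out ./... records which statements the tests exercise into coverage.out, and the Sonar step reads that file later.

// greeting/greeting.go (hypothetical example package)
package greeting

import "fmt"

// Hello returns a salutation for the given name.
func Hello(name string) string {
	return fmt.Sprintf("Hello, %s!", name)
}

// greeting/greeting_test.go
package greeting

import "testing"

// TestHello exercises Hello; its statement coverage ends up in coverage.out,
// which sonar.go.coverage.reportPaths points Sonar at.
func TestHello(t *testing.T) {
	if got := Hello("world"); got != "Hello, world!" {
		t.Errorf("unexpected greeting: %q", got)
	}
}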

The second step above is the Sonar step. Since downloading the entire CLI tool each time was too slow, I figured I would start with the Docker image published by SonarSource. This loads much faster than copying the tool down. Once that image starts up, we set some environment variables and then invoke our scanner. This setup assumes that you have created an API token for your Sonar instance and that you have configured it as a pipeline variable in Bitbucket.

Sonar config

There is just one thing missing now, and that is our Sonar config. In the root of your project, create a sonar-project.properties file to configure your settings. Mine looks similar to this:

sonar.projectKey=haskovec:MyProject
sonar.projectName=MyProject
sonar.host.url=https://sonar.haskovec.com

sonar.sources=.
sonar.exclusions=**/*_test.go,**/vendor/**,**/mocks/**

sonar.tests=.
sonar.test.inclusions=**/*_test.go
sonar.test.exclusions=**/vendor/**
sonar.go.coverage.reportPaths=coverage.out

sonar.sourceEncoding=UTF-8

Here we configure a project name and key and point to the URL of our Sonar instance. Then we configure which sources to scan, which to exclude, and which tests to include. We also point Sonar at our code coverage file. With the configuration in place and our steps declared, we can finish the pipeline by having it run on master as shown below:

pipelines:
  default:
    - step: *build

  branches:
    master:
      - step: *build
      - step: *sonar

This says that whenever anything happens (someone pushes a commit to any branch) we will run our build step. We then set up a branch-specific rule so that whenever anyone merges code into master, we build the code, run the Sonar analysis, and publish the results to our Sonar server.

Conclusion

That is all you need to do to get a very basic pipeline up and running on Bitbucket that will build your Go app and run Sonar against all new features merged into your master branch. I hope this helps and saves you a bunch of time, as it took me a while to figure out how to get it running well.

Categories
general

Cobertura is gone and Clover is here

I have spent most of this week working on integrating Clover into our environment and ripping out Cobertura. I ran into a couple of issues along the way, but we are up and running now.

The first thing I dislike about Clover is that, by default, it messes with the Maven artifacts that you may intend to ship. I think this is a poor way to instruct people to configure it out of the box, because it basically forces you to either run it only occasionally on separate builds, invoke Maven multiple times, or resort to similar hacks. I didn't like any of those options, as the idea is to fail the build if coverage drops below the acceptable level and not accept the commit until that is addressed. Luckily I stumbled upon the clover2:instrument goal, which you can use instead of the recommended default clover2:setup goal.

But then I hit a second problem: the way clover2:instrument names the instrumented classes seemed to clash with the JPA 2.0 Metamodel generator we were using. I had been looking for an excuse to rip that out of the project for a while, so I removed it from our software and replaced it with plain reflection on the classes, using unit tests to verify at test time that the code wasn't broken instead of the compile-time checks we got from the metamodel.

With that gone, Clover integrated cleanly and I got it wired into our Jenkins configuration. Today I was able to get our configuration manager to install the Clover plugin into Jenkins instead of using the publish HTML report option, and we have much nicer integration. With the Sonar Clover plugin we now have integration with SonarQube. The Sonar plugin brings in the coverage, but it no longer lists technical debt the way the Cobertura plugin did. Aside from that, I think this is going to be a much better solution for us going forward, and I am glad we could finally switch.

Categories
general

Project Estimation

The thing I dislike most in software development is when they ask me to estimate how long a given project will take. I am about to start a new project, so of course the first thing asked of me is to do some research, figure out what the high-level tasks of the project will be, and estimate how long they will take. This seems like a reasonable thing to do: obviously, if the company is going to invest a lot of money into a project, they want at least a rough idea of how much it is going to cost. Additionally, if the scope of the work doesn't fit the time frame in which they need the feature, they can decide whether to limit the scope of the project or add resources to it. So all in all I can see the need and the point of it, but I think I dislike it because I am not very good at it.

On the first project I led at my current company, I came up with a bunch of estimates and actually did a pretty good job of identifying the major areas of work that needed to be done. I went through and applied my time estimates: for the features I felt I understood very well I gave fairly tight estimates, and for the features I understood less I added extra padding for research and learning time. Then I got into the project, and the parts I thought I had the biggest handle on were actually much bigger than I had realized. I had, I think, 2 weeks of work estimated on one aspect that actually ran about 6 weeks, and I believe the whole project was a 3-month project. So of course the project manager was sweating it a little bit. I told him not to worry, that I always hit my dates, and that if I thought the date was in danger I would let him know immediately. As we went on, the other aspects that I didn't feel I understood as well turned out to be easier than expected and I made up the time there. By the end of the project I delivered on the exact date I had promised 3 months previously, and I didn't put myself into a death march, so I considered that a successful project. From an estimation standpoint, though, maybe it was a failure, as all my estimates were off even though I delivered what they wanted when they wanted it.

So here I am again, working on an estimate for a new project where the date is already known. At this point my thinking is to make sure I understand the project well enough to have the resources to hit the date. Hopefully the experience of that first project will help me avoid being too aggressive on the parts I think I understand, since there are probably some icebergs, and also avoid being too lax, so that in the end I deliver on the date we need it (or a week or two early) without putting myself into a death march. Wish me luck.

On a positive note, I resolved all the issues with our new SonarQube server instance and we transitioned to it last Friday. We are now able to use the plugin in IntelliJ to download the data and analyze our local projects, which is a big step forward. Additionally, running it as one unified Sonar job from the parent POM, instead of invoking it on each Maven module, has shaved about 10 minutes off our builds with Sonar analysis and given us better Sonar coverage overall (previously some taglib libraries and a few other small things weren't being analyzed).

Categories
Java

Getting crushed by SonarQube

I have been upgrading our Sonar server from 4.5 to 4.5.2 and restructuring our project. I was initially planning on upgrading to SonarQube 5.0, but the upgrade process can't seem to handle our database. After I upgraded to 4.5.2, I started restructuring. Initially we had each of our libraries set up as a separate project at work, and there was a separate Sonar project for each one. At some point we decided it was much better to consolidate them all under one git repository and make one Maven master POM with each project as a module. When we did that, we never got around to consolidating our Sonar setup into one project with sub-projects. After we upgraded to IntelliJ, we found that we couldn't use the Sonar plugins to integrate with our environment, as our project structure didn't match our Sonar project.

Hence I started working on restructuring it to reflect our current project structure. Of course, me being me, the first thing I wanted to do was update to the latest version. After the database schema upgrade to version 5.0 failed, I restored from a previous backup and then did the upgrade to 4.5.2. After upgrading I also had to upgrade many of our plugins. Upon completion of that I ran the analysis and started working on fixing the new errors. I was getting pretty close to having all the issues fixed when I discovered that many of the rules we were using were deprecated. We had 99 deprecated rules, so I disabled them and enabled the suggested replacements. Oh, what a mistake: after being down to about 60 issues to fix, that put me up to 1,000. Ay! In the end we will have much better rules in place for our code, but after working on it all day today and not quite resolving all the issues, I am sort of kicking myself for upgrading too much at once. Oh well, I guess in the end it will be worth the pain.