Code Coverage

In my current position, one of the metrics we track is code coverage for our unit tests. When I started at the company we were using JUnit with Mockito and JaCoCo. This was a pretty good setup: we got good coverage reports, and Mockito made test writing much easier.
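
To give a feel for what that day-to-day testing looked like, here is a minimal sketch of a plain Mockito test; the service and repository are made-up stand-ins for our real code.

    import static org.junit.Assert.assertEquals;
    import static org.mockito.Mockito.mock;
    import static org.mockito.Mockito.when;

    import org.junit.Test;

    public class AccountServiceTest {

        // Hypothetical collaborators standing in for our real code.
        interface AccountRepository { int countForCustomer(String name); }
        static class AccountService {
            private final AccountRepository repo;
            AccountService(AccountRepository repo) { this.repo = repo; }
            int accountCount(String customer) { return repo.countForCustomer(customer); }
        }

        @Test
        public void countsAccountsUsingMockedRepository() {
            // Mockito stubs the repository so the test needs no database.
            AccountRepository repo = mock(AccountRepository.class);
            when(repo.countForCustomer("alice")).thenReturn(3);

            assertEquals(3, new AccountService(repo).accountCount("alice"));
        }
    }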

One of the limitations of Mockito is that you can’t mock private methods or static methods. This presented an issue for us in reaching our desired level of coverage. We initially worked around some of the private method issues using reflection, but it wasn’t always ideal. The decision was made to use PowerMock. PowerMock solved all of our Mockito issues immediately. It was compatible with Mockito but gave us some powerful new features that allowed us to get much better unit test coverage. Then we ran our JaCoCo reports and found that the reporting no longer worked. Because PowerMock uses bytecode manipulation to mock static methods, it is not compatible with JaCoCo, and there is no plan for them to support measuring that.
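
For reference, this is roughly what PowerMock bought us: the ability to stub a static call from an otherwise ordinary Mockito-style test. The LegacyIdGenerator class below is a made-up stand-in for our real code.

    import static org.junit.Assert.assertEquals;
    import static org.mockito.Mockito.when;

    import org.junit.Test;
    import org.junit.runner.RunWith;
    import org.powermock.api.mockito.PowerMockito;
    import org.powermock.core.classloader.annotations.PrepareForTest;
    import org.powermock.modules.junit4.PowerMockRunner;

    // Made-up legacy class: all static, so plain Mockito cannot stub it.
    class LegacyIdGenerator {
        static long nextId() { return System.currentTimeMillis(); }
    }

    @RunWith(PowerMockRunner.class)
    @PrepareForTest(LegacyIdGenerator.class)
    public class LegacyIdGeneratorTest {

        @Test
        public void stubsTheStaticCall() {
            // PowerMock rewrites the class so the static method can be intercepted.
            PowerMockito.mockStatic(LegacyIdGenerator.class);
            when(LegacyIdGenerator.nextId()).thenReturn(42L);

            assertEquals(42L, LegacyIdGenerator.nextId());
        }
    }

This bytecode rewriting is exactly the feature that trips up coverage tools, which is where the rest of this story comes from.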

So we figured no big deal and switched to Cobertura. The first problem with this change is that Cobertura 2.0.3 has a regression in it, so it won’t report coverage with PowerMock. We figured no big deal, we would run the 1.9.4.1 release until they shipped a bug fix and then update. Unfortunately the update has never come. You go to the roadmap on the site and see it hasn’t been updated since November 7, 2013. There appears to be very little activity on the project, and we haven’t seen an update in 2 years. I see work going on in the GitHub repository, but there seems to be no attempt at doing any maintenance or fixing the issues for existing users, and I can’t get a feel for when the 2.1 release is ever going to come out. A second issue is that the current version doesn’t support Java 8, and I would like to update to Java 8 in the very near future. At what point, when dealing with open source software, do you say the project is either dead or too inactive for us to rely on for business needs?

Cut to last week: I was updating some libraries in the project, and when I upgraded from PowerMock 1.5.5 to PowerMock 1.6.1 my coverage reports went to 0. So it seems our old version of Cobertura can’t handle the latest PowerMock. I did a test with Atlassian Clover, and our coverage reports worked perfectly and looked better than any report I have ever seen. At that point I decided I had reached my breaking point with Cobertura and put in a request that we move to Clover and buy some server licenses. In the meantime, while waiting for approval, I had to settle for PowerMock 1.5.6.

Going forward, when someone suggests a new tool to fix an issue we are having, I am going to look closely at everything that tool drags along with it. I don’t want to be in this situation again, where we are trading one problem for another.

Spring 4.1 / AspectJ Progress

My coworker discovered that the new version of AspectJ already has flags built in to turn off annotation processing. If we can use those, we can continue using the Maven Processor Plugin to generate the Hibernate Metamodel data and not have to abandon it. The problem at this point is that the AspectJ Maven plugin doesn’t support passing those flags along to AspectJ. So the next step is to get a patch into that plugin, and hopefully we can make the jump to Spring 4.1 at the start of the new year. After that I am going to focus on updating our container so we can finally make the move to Java 8 at work.

SpringOne2GX 2014 Java8 Language Capabilities

I was fortunate enough to attend SpringOne this year, and I went to a talk by Venkat Subramaniam on Java 8. I have to say, before attending this talk I had always been sort of meh on the functional features brought into the language, but this really got me excited about them. This is the first talk on functional programming I have ever heard that wasn’t boring and actually engaged the listeners. I strongly recommend people check it out:

Maven Compiler Plugins, AspectJ, and the Hibernate Metamodel generator

For a while now I have been avoiding upgrading the Maven compiler plugin. We are running 2.5.1 at work. The problem is that the 3.x version appears to have been rewritten, and it doesn’t want to play nice with the maven-processor-plugin that we use to run the Hibernate metamodel generator. So far my attitude was: cool, I just won’t upgrade to the new version.

Then AspectJ 1.8.2 came out, along with a new AspectJ Maven plugin that also seems to be built like the new compiler plugin. At this point I figured I might as well update both, since Spring 4.1 wants at least AspectJ 1.8.2. But the whole thing still falls apart at the metamodel step. I found the forceJavacCompilerUse flag for the Maven compiler plugin, but even that didn’t solve the problem for me. A coworker pointed out that AspectJ seems to be doing what we were using the maven-processor-plugin for, generating the metamodel classes for the entities, so he disabled that plugin. However, for some reason, instead of dumping the generated files in the target directory it puts them in whatever directory you run the build from, and we can’t seem to find a way to get it to drop them in the target folder.
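
For anyone who hasn’t used it, the metamodel generator is just an annotation processor: for every entity it emits a companion class full of static attribute handles. Roughly like this, where Order is a made-up entity standing in for our real ones.

    // Order.java: one of our entities (illustrative).
    import javax.persistence.Entity;
    import javax.persistence.Id;

    @Entity
    public class Order {
        @Id
        private Long id;
        private String customerName;
        // getters and setters omitted
    }

    // Order_.java: produced by the Hibernate JPA 2 metamodel generator
    // (hibernate-jpamodelgen), which should land under target/generated-sources
    // when the annotation processor is wired up correctly.
    import javax.persistence.metamodel.SingularAttribute;
    import javax.persistence.metamodel.StaticMetamodel;

    @StaticMetamodel(Order.class)
    public abstract class Order_ {
        public static volatile SingularAttribute<Order, Long> id;
        public static volatile SingularAttribute<Order, String> customerName;
    }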

So at this point we either need to keep banging our heads against the wall on this, or consider rewriting the one place we use the metamodel classes so it no longer needs them, and throw that code out in order to be able to upgrade to Spring 4.1.
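
To make the trade-off concrete, here is roughly what that rewrite would mean, using the made-up Order entity from the sketch above: the generated metamodel gives us compile-time-checked attribute references, and without it we fall back to string attribute names.

    import javax.persistence.EntityManager;
    import javax.persistence.criteria.CriteriaBuilder;
    import javax.persistence.criteria.CriteriaQuery;
    import javax.persistence.criteria.Root;

    public class OrderQueries {

        // With the generated metamodel: a typo in the attribute fails at compile time.
        public CriteriaQuery<Order> byCustomerTyped(EntityManager em, String name) {
            CriteriaBuilder cb = em.getCriteriaBuilder();
            CriteriaQuery<Order> q = cb.createQuery(Order.class);
            Root<Order> root = q.from(Order.class);
            return q.where(cb.equal(root.get(Order_.customerName), name));
        }

        // Without the metamodel: the attribute name is a string, so a typo only
        // shows up at runtime. This is what we would be trading down to.
        public CriteriaQuery<Order> byCustomerStringly(EntityManager em, String name) {
            CriteriaBuilder cb = em.getCriteriaBuilder();
            CriteriaQuery<Order> q = cb.createQuery(Order.class);
            Root<Order> root = q.from(Order.class);
            return q.where(cb.equal(root.get("customerName"), name));
        }
    }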

G1GC String Deduplication of a simple Spring Boot Webapp

I was messing around with some of the settings in the Java 8 VM. I have been playing around with Spring Boot lately, so I have a minimal webapp in Spring Boot that has a couple of entities, services, and controllers. I have it configured to run as a standalone jar with an embedded Tomcat 8 server. When I do a java -server -jar myapp.jar, Spring Boot launches my app, and when it finishes loading the Java process is sitting at 870,160K of memory.
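
The app itself is nothing special; the entry point is along these lines (the class name is just an example), built as an executable jar with the web starter so the embedded Tomcat comes up on launch.

    import org.springframework.boot.SpringApplication;
    import org.springframework.boot.autoconfigure.EnableAutoConfiguration;
    import org.springframework.context.annotation.ComponentScan;
    import org.springframework.context.annotation.Configuration;

    // Component scanning picks up the controllers, services, and entities;
    // auto-configuration plus spring-boot-starter-web provides the embedded Tomcat.
    @Configuration
    @EnableAutoConfiguration
    @ComponentScan
    public class MyApp {
        public static void main(String[] args) {
            SpringApplication.run(MyApp.class, args);
        }
    }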

Then I launched my app as java -server -XX:+UseG1GC -XX:+UseStringDeduplication -jar myapp.jar. Spring Boot launches the app, and when it finished loading it was using 525,076K. That seems like a pretty big savings to me, and I was seeing even more dramatic results when testing over the weekend, like a memory size of 900,000K for the first case and 366,000K for the second case. Either way, I have to think that if I were to launch a site today on something like Amazon EC2, where the cost goes up quite a bit as memory rises, I would be strongly tempted to run on this GC. I know the tests I have seen show this isn’t quite as fast as ConcMarkSweep and some of the other options, but if you are memory constrained I think it would be worth testing whether it is fast enough. This test was run on a 64-bit Java 8 Update 25 VM under Windows 7 (with 8 GB of RAM on the machine).
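
If you want to see what the deduplication pass is actually finding, rather than just watching the overall process size, the G1 collector in Java 8 can print per-pass statistics. I believe the flag is the one below, but treat it as something to verify against your JVM version rather than gospel.

    java -server -XX:+UseG1GC -XX:+UseStringDeduplication \
         -XX:+PrintStringDeduplicationStatistics -jar myapp.jar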

The road to Spring Framework 4.1

Earlier this year Spring Framework 4.1 was released. I was excited to try out the new features in our project at work, and having previously upgraded us from Spring 3.1 to 3.2 and from 3.2 to 4.0, I was expecting this to be another routine Spring update. Alas, that was not to be.

One of the first things I do when looking at a major version upgrade is to check the dependency versions to make sure all of our libraries are new enough for the upgrade. In this case I discovered that Spring 4.1 requires Jackson 2.x, and we were running Jackson 1.9.x.

In theory this isn’t a huge deal. The big breaking change in Jackson 2 is a new code namespace, so it was basically: change some imports, fix a few settings where you declare your ObjectMappers, and you are good to go. I made all of said changes and all looked well. Unfortunately a few of our unit tests broke. It was then that I discovered that one of our data structures being written to Cassandra had a couple of Joda DateTimes in it, and instead of being written out as a nice timestamp string they were being serialized as the internal structure of the DateTime class. And where Jackson 1.9 would deserialize that without issue, 2.x wouldn’t. To make matters worse, we had over 40 million records written to our Cassandra database in this incompatible format. Given the size of the data and the inability to take an outage to fix the data structures, I needed to come up with a solution to this problem.
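
As an aside, for anyone who hasn’t made the jump yet, the namespace move itself is mechanical. Roughly, the change looks like this; the mapper setting shown is just an example of the kind of configuration that needs touching.

    // Jackson 1.9 (old namespace):
    // import org.codehaus.jackson.map.DeserializationConfig;
    // import org.codehaus.jackson.map.ObjectMapper;
    //
    // ObjectMapper mapper = new ObjectMapper();
    // mapper.configure(DeserializationConfig.Feature.FAIL_ON_UNKNOWN_PROPERTIES, false);

    // Jackson 2.x: the same pieces live under com.fasterxml.jackson.
    import com.fasterxml.jackson.databind.DeserializationFeature;
    import com.fasterxml.jackson.databind.ObjectMapper;

    public class MapperConfig {
        public static ObjectMapper buildMapper() {
            ObjectMapper mapper = new ObjectMapper();
            mapper.configure(DeserializationFeature.FAIL_ON_UNKNOWN_PROPERTIES, false);
            return mapper;
        }
    }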

At that point I had to revert the changes, and they couldn’t go into our build for that week. As it happens, I was fortunate enough to attend SpringOne 2GX 2014, so while there I attended a talk about upgrading to Spring 4.1. After the talk I went up to the speaker and explained my dilemma, but he didn’t seem to have any useful advice for me. I think there was maybe a suggestion about writing a process that deserializes with the old library and then rewrites the record with the new library, which is doable given that the two versions live in separate namespaces. However, that solution wouldn’t work for us, as we can’t break the functionality while the data is being updated to the new format.

The solution I settled on when returning to the office was instead to write a custom deserializer for DateTime objects. My custom deserializer understood both the old and the new formats: it would look at the content of the JSON and, based on that, pull the relevant data to construct our DateTime object. Then on the serialization side I used the Jackson module for Joda-Time. So what happens is that any time a record is written to the Cassandra table it is written in the new format, and on read we can handle both formats. This was the only workable solution I could come up with given our requirements. At that point I noticed that with my new changeset we were still getting the old Jackson 1.9 libraries in our build as well as the new version. Upon investigation I found that our old 1.2 Cassandra driver was pulling in the old Jackson library. I checked the documentation for the 2.0 driver and it said it was compatible with 1.2 versions of the database. The 2.0 driver did change the way metrics are collected, but it wasn’t too large of a change, so I made those changes as well so we could finally get rid of all traces of the old version of Jackson.
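
Our actual deserializer is tied to our data model, but the shape of the approach looks something like the sketch below. It assumes the legacy records carry the bean-serialized internals of DateTime (a millis field in this sketch; inspect your own data), while new records are plain ISO-8601 strings, and it intends for the custom module to take precedence over JodaModule on reads, which is worth verifying against your Jackson version.

    import java.io.IOException;

    import org.joda.time.DateTime;
    import org.joda.time.DateTimeZone;

    import com.fasterxml.jackson.core.JsonParser;
    import com.fasterxml.jackson.databind.DeserializationContext;
    import com.fasterxml.jackson.databind.JsonDeserializer;
    import com.fasterxml.jackson.databind.JsonNode;
    import com.fasterxml.jackson.databind.ObjectMapper;
    import com.fasterxml.jackson.databind.module.SimpleModule;
    import com.fasterxml.jackson.datatype.joda.JodaModule;

    public class LenientDateTimeDeserializer extends JsonDeserializer<DateTime> {

        @Override
        public DateTime deserialize(JsonParser p, DeserializationContext ctxt) throws IOException {
            JsonNode node = p.getCodec().readTree(p);
            if (node.isTextual()) {
                // New format: an ISO-8601 timestamp string.
                return DateTime.parse(node.asText());
            }
            // Old format: bean-serialized DateTime internals.
            // "millis" is an assumption here; check what your legacy records contain.
            return new DateTime(node.get("millis").asLong(), DateTimeZone.UTC);
        }

        public static ObjectMapper buildMapper() {
            SimpleModule legacySupport = new SimpleModule()
                    .addDeserializer(DateTime.class, new LenientDateTimeDeserializer());
            return new ObjectMapper()
                    .registerModule(new JodaModule())   // writes DateTime in the new format
                    .registerModule(legacySupport);     // reads both old and new formats
        }
    }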

My code passed all of our unit tests and the testing I did on my local machine, so I pushed it to the code base right after a branch cut to give all the other developers maximum time to run and test it with me. Everyone ran it all week and it ran great. It was cut into the next week’s build and sailed through the QA environment. Our staging system ran without a hitch and we were all set to deploy. That week the deploy was delayed one night to accommodate some unrelated issues found in the QA process.

And then out of the blue my phone rings at 1am. It is my boss. It turns out they are having issues with the deploy. For me to be called at 1am means there had already been other issues in this deploy, since at that point the team was two hours into the call. Needless to say, this wasn’t a good night from an operations standpoint. Even though the change had sailed through all of our test environments, in our production system we couldn’t connect to the Cassandra cluster. I was asked whether it was safe to just change the Cassandra driver version in the pom back to the old version, or if we had to revert the entire changeset (including all the Jackson fixes). My first thought, in the fog of being abruptly woken up, was that we could just change that one library, and then we remembered all the metrics-gathering code that had changed with it. So at that point we had to revert the whole changeset and spin a new build to finish the deploy. At 2:15am I returned to bed, and once again we were still on the old Jackson.

It was determined that the production system was running a different OS version than our staging server, and that was believed to be the cause of the Cassandra driver failure. So the next week I added back all of my Jackson work, but left the old Cassandra driver. That finally did it, and we had solved the first issue we would face on this path to Spring 4.1. When I did some follow-up research I found that the Cassandra 1.2 driver page says the 2.0 version is not compatible with the 1.2 database, even though both the 2.0 and 2.1 drivers’ documentation pages say they are. Needless to say, DataStax has some inconsistencies in what they say about the product. Unfortunately the saga isn’t over yet: a new issue has emerged that is being worked through so we can upgrade to 4.1, but I will follow up with a post on that at a later date.