The road to Spring Framework 4.1


Earlier this year Spring Framework 4.1 was released. I was excited to try out the new features in our project at work, and having previously upgraded us from Spring 3.1 to 3.2 and from 3.2 to 4.0, I was expecting this to be another routine Spring update. Alas, it was not to be.

One of the first things I do when preparing for a major version upgrade is to check the versions of all our dependencies to make sure they are new enough to support the upgrade. In this case I discovered that Spring 4.1 requires Jackson 2.x, and we were running Jackson 1.9.x.

In theory this isn’t a huge deal. The big breaking change in Jackson 2 is a new code namespace (org.codehaus.jackson became com.fasterxml.jackson), so it was basically a matter of changing some imports, fixing a few settings where we declare our ObjectMappers, and we’d be good to go. I made all of said changes and all looked well. Unfortunately, a few of our unit tests broke. It was then that I discovered that one of our data structures being written to Cassandra had a couple of Joda DateTimes in it, and instead of being written out as a nice timestamp string, Jackson was actually serializing out the internal structure of the DateTime class. And while Jackson 1.9 would deserialize that without issue, 2.x wouldn’t. To make matters worse, we had over 40 million records written to our Cassandra database in this incompatible format. Given the size of the data and the inability to take an outage to fix the data structures, I needed to come up with another solution to this problem.
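For anyone facing the same migration, the mechanical part of the change looks roughly like this (a minimal sketch; the package and feature names are the real Jackson ones, but your ObjectMapper settings will differ):

```java
// Jackson 1.9: the old namespace
// import org.codehaus.jackson.map.ObjectMapper;

// Jackson 2.x: same class, new namespace
import com.fasterxml.jackson.databind.DeserializationFeature;
import com.fasterxml.jackson.databind.ObjectMapper;

public class MapperFactory {
    public static ObjectMapper create() {
        // Most 1.9 configuration carries over, but the config enums moved too,
        // e.g. DeserializationConfig.Feature became DeserializationFeature.
        ObjectMapper mapper = new ObjectMapper();
        mapper.configure(DeserializationFeature.FAIL_ON_UNKNOWN_PROPERTIES, false);
        return mapper;
    }
}
```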

At that point I had to revert the changes, and they couldn’t go into our build for that week. As it happens, I was fortunate enough to attend SpringOne 2GX 2014, and while there I attended a talk about upgrading to Spring 4.1. After the talk I went up to the speaker and explained my dilemma, but he didn’t seem to have any useful advice for me. There was maybe a suggestion about writing a process that deserializes each record with the old library and rewrites it with the new one, which is doable since the two namespaces can coexist on the classpath. However, that solution wouldn’t work for us, as we couldn’t break the functionality while the data was being migrated to the new format.

The solution I settled on upon returning to the office was instead to write a custom deserializer for DateTime objects. My custom deserializer understood both the old and the new formats: it would look at the content of the JSON and, based on that, pull the relevant data to construct our DateTime object. On the serialization side I used the Jackson Joda-Time module, so any record written to the Cassandra table goes out in the new format, while reads can handle both formats. This was the only workable solution I could come up with given our requirements.

At that point I noticed that with my new change set we were still getting the old Jackson 1.9 libraries in our build alongside the new version. Upon investigation I found that our old 1.2 Cassandra driver was pulling in the old Jackson library. I checked the documentation for the 2.0 driver and it said it was compatible with 1.2 versions of the database. The 2.0 driver did change the way it exposes metrics, but it wasn’t too large a change, so I made those changes as well so we could finally get rid of all traces of the old version of Jackson.
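In rough terms, the read side looked something like the sketch below. It assumes the old Jackson 1.9 bean-style output of DateTime’s internals included a millis field (what getMillis() would yield under default bean introspection) and pins the zone to UTC for illustration; the exact field names in our data aren’t reproduced here:

```java
import java.io.IOException;

import org.joda.time.DateTime;
import org.joda.time.DateTimeZone;

import com.fasterxml.jackson.core.JsonParser;
import com.fasterxml.jackson.databind.DeserializationContext;
import com.fasterxml.jackson.databind.JsonDeserializer;
import com.fasterxml.jackson.databind.JsonNode;

public class LenientDateTimeDeserializer extends JsonDeserializer<DateTime> {

    @Override
    public DateTime deserialize(JsonParser p, DeserializationContext ctxt)
            throws IOException {
        JsonNode node = p.readValueAsTree();
        if (node.isObject()) {
            // Old format: Jackson 1.9 serialized the DateTime bean's internal
            // structure; the instant itself is recoverable from the millis field.
            return new DateTime(node.get("millis").asLong(), DateTimeZone.UTC);
        }
        // New format: an ISO-8601 string written by the Joda module.
        return DateTime.parse(node.asText());
    }
}
```

Registering it after the Joda module (jackson-datatype-joda) means writes always go out in the new ISO-8601 format while reads tolerate both; the factory class name here is hypothetical:

```java
import org.joda.time.DateTime;

import com.fasterxml.jackson.databind.ObjectMapper;
import com.fasterxml.jackson.databind.SerializationFeature;
import com.fasterxml.jackson.databind.module.SimpleModule;
import com.fasterxml.jackson.datatype.joda.JodaModule;

public class CassandraMappers {
    public static ObjectMapper create() {
        ObjectMapper mapper = new ObjectMapper();
        // Serialize Joda types via the official module, as ISO-8601 strings
        // rather than numeric timestamps.
        mapper.registerModule(new JodaModule());
        mapper.disable(SerializationFeature.WRITE_DATES_AS_TIMESTAMPS);

        // Registered after JodaModule so this deserializer takes precedence
        // for DateTime and can read both the old and the new format.
        SimpleModule legacySupport = new SimpleModule("legacy-datetime");
        legacySupport.addDeserializer(DateTime.class, new LenientDateTimeDeserializer());
        mapper.registerModule(legacySupport);
        return mapper;
    }
}
```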

My code passed all our unit tests and the testing I did on my local machine, so I pushed it to the code base right after a branch cut to give all the other developers maximum time to run and test it with me. Everyone ran it all week and it ran great. It was cut into the next week’s build and sailed through the QA environment. Our staging system ran without a hitch and we were all set to deploy. That week the deploy was delayed one night to accommodate some unrelated issues found in the QA process.

And then out of the blue my phone rang at 1am. It was my boss. It turned out they were having issues with the deploy. For me to be called at 1am meant there had already been other issues with this deploy, since by that point they were two hours into the call. Needless to say, this wasn’t a good night from an operations standpoint. Even though the change had sailed through all our test environments, in production we couldn’t connect to the Cassandra cluster. I was asked whether it was safe to just change the Cassandra driver version in the pom back to the old version, or if we had to revert the entire changeset (including all the Jackson fixes). My first thought, through the fog of being abruptly woken up, was that we could just change that one library, and then we remembered all the metrics gathering code that had changed with it. So we had to revert the whole changeset and spin a new build to finish the deploy. At 2:15am I returned to bed, and once again we were still on the old Jackson.

It was later determined that the production system was running a different OS version than our staging server, and that was believed to be the cause of the Cassandra driver failure. So the next week I added back in all my Jackson work, but left the old Cassandra driver in place. That finally did it, and we had solved the first issue on this path to upgrading to Spring 4.1. In some follow-up research I found that the Cassandra 1.2 driver page says the 2.0 driver is not compatible with the 1.2 database, even though the documentation pages for both the 2.0 and 2.1 drivers say they are. Needless to say, DataStax has some inconsistencies in what they say about the product. Unfortunately the saga isn’t over yet; a new issue has emerged that is being worked through so we can upgrade to 4.1, but I will follow up with a post on that at a later date.
