For several months now I have been reading all the hype about Docker on the blogs. I had mostly been ignoring it, skimming a post here and there, but I wasn't that interested. One of my coworkers, on the other hand, has taken a big interest and has started working on moving the different services we run into containers.
When we started out with our new architecture we required people to install different services to get their development environment up and running. At first this wasn't that big of a deal: you needed to install RabbitMQ in addition to JBoss and set up a SQL Server database. Then we added memcached into the mix. At this point environment setup was getting pretty complex for anyone new we hired, and our architect came up with a solution to make it easier: use a VirtualBox image to host RabbitMQ and memcached, as well as the newly added Solr and ZooKeeper. This was a great solution for a while; it allowed us to get people up and running much faster and to add new things as we needed them (like Cassandra). There are a couple of problems with this solution, though. If we roll out a new version of, say, Cassandra, like we are doing now, you lose all of your data. The other issue is that our architect was promoted, and the solution is no longer being maintained.
Enter Docker! My coworker who was so interested in the technology started doing the research and work to set up all of our services inside Docker, making our environments easier to maintain and set up than with the current VirtualBox solution, and it has the potential to be used all the way from the development environment through the testing and staging environments into production. One of the big problems with one big VM holding all the services is that you can't update just one service at a time; you reload the whole image, which means your Cassandra data gets wiped out and you have to reimport it even if Cassandra didn't need to be updated. A single VM image also means one person has to maintain the entire thing, while with Docker each service can be maintained by a different person. For example, I am working on an upgrade to get us up to JBoss EAP 6.4. I could put the container with our customizations into Docker, and everyone else could just pull down the latest without having to do most of the configuration.
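To give a feel for the idea, here is a minimal sketch of what a shared, pre-configured JBoss container might look like. The base image name, file paths, and config files are all illustrative assumptions, not our actual setup:

```dockerfile
# Hypothetical Dockerfile for a JBoss image with our customizations baked in.
# Base image and paths are made up for illustration.
FROM jboss/wildfly:latest

# Copy in the server configuration and shared modules everyone needs,
# so a new developer doesn't have to configure JBoss by hand.
COPY standalone.xml /opt/jboss/wildfly/standalone/configuration/standalone.xml
COPY modules/ /opt/jboss/wildfly/modules/

EXPOSE 8080 9990
CMD ["/opt/jboss/wildfly/bin/standalone.sh", "-b", "0.0.0.0"]
```

The point is that whoever owns a service maintains its image, and everyone else just does a `docker pull` to get the latest, instead of one person maintaining a single monolithic VM.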
We have a mixture of Linux and Windows on the client side at the office, and we deploy to Linux. Since Docker is a native Linux solution, we need to use boot2docker on Windows. We also have a couple of developers who use Macs they brought from home, so they too would need something like the boot2docker approach. The Linux configuration of Docker is nearly complete and almost ready for people to start moving to. The Windows setup is having an issue connecting to Cassandra inside the Docker container. Once we get that ironed out, I expect we will move forward in this direction, as it seems like an amazing platform. In the end: I thought Docker was a lot of hype when I saw it splashed everywhere on the blogs, but after beginning to use it I see why people are excited. It does have a bit of a learning curve, but I look forward to messing around with it some more.
Last Monday I got into the office and decided that's it, I am going to get our app upgraded to Spring 4.1. I had been working on this off and on for about 9 months: updating dependencies in the pom, doing some testing, wash, rinse, repeat…
As I mentioned in a previous post, one of the first issues I hit was the new AspectJ compiler running the Hibernate metamodel generator and dumping a bunch of generated classes in the root-level directory of wherever Maven was running. I had opened a JIRA issue against the aspectj-maven-plugin. There was even a user who contributed a patch for the issue, and the developer promised to look at it in January, but months went by with no effort to resolve it. Now Codehaus is shut down and the active projects have moved to MojoHaus. As of yet the aspectj-maven-plugin hasn't been moved, so more and more it looks like my decision to download the code from their SVN repository and fork it on GitHub was correct.
The first thing I did on Monday was to clone that repository, fix a couple of broken unit tests I had in it, and publish it to our Nexus server. Next I updated the pom files to use my new version of the plugin and added the <proc>none</proc> config option that was introduced in AspectJ 1.8.2. I fired up my local environment to test everything and discovered that one of the major features of the app was broken. It looks like Spring 4.1 changed the way it handles ConverterFactories. We use a lot of Hibernate UserTypes in our app to map things to enums. The enums implement a common interface that handles the mapping between the stored value of that enum and the enum itself. With a little tweaking of how those enums were used, I was able to get Spring to convert them correctly again using our ConverterFactories.
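For reference, the plugin wiring in the pom ends up looking roughly like this. The fork's version number is made up, and I'm assuming the patched plugin exposes ajc's -proc option as a <proc> parameter:

```xml
<plugin>
  <groupId>org.codehaus.mojo</groupId>
  <artifactId>aspectj-maven-plugin</artifactId>
  <!-- our forked build of the plugin, published to the internal Nexus;
       the version string here is illustrative -->
  <version>1.8-fork</version>
  <configuration>
    <complianceLevel>1.7</complianceLevel>
    <!-- proc=none (AspectJ 1.8.2+) stops ajc from running annotation
         processors such as the Hibernate metamodel generator -->
    <proc>none</proc>
  </configuration>
  <executions>
    <execution>
      <goals>
        <goal>compile</goal>
      </goals>
    </execution>
  </executions>
</plugin>
```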
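The enum-mapping pattern itself is simple enough to sketch. The Converter/ConverterFactory interfaces below are hand-rolled stand-ins for Spring's org.springframework.core.convert.converter interfaces so the sketch runs standalone, and PersistableEnum/Status are hypothetical names, not our real code:

```java
// Stand-ins for Spring's Converter and ConverterFactory interfaces
// (org.springframework.core.convert.converter), so this compiles alone.
interface Converter<S, T> {
    T convert(S source);
}

interface ConverterFactory<S, R> {
    <T extends R> Converter<S, T> getConverter(Class<T> targetType);
}

// Common interface our enums implement: each constant knows the value
// it is stored as in the database column.
interface PersistableEnum {
    String storedValue();
}

enum Status implements PersistableEnum {
    ACTIVE("A"), INACTIVE("I");

    private final String code;

    Status(String code) { this.code = code; }

    public String storedValue() { return code; }
}

// One factory handles every PersistableEnum type by matching the incoming
// string against each constant's stored value.
class PersistableEnumConverterFactory
        implements ConverterFactory<String, PersistableEnum> {
    public <T extends PersistableEnum> Converter<String, T> getConverter(
            final Class<T> targetType) {
        return new Converter<String, T>() {
            public T convert(String source) {
                for (T constant : targetType.getEnumConstants()) {
                    if (constant.storedValue().equals(source)) {
                        return constant;
                    }
                }
                throw new IllegalArgumentException(
                        "No " + targetType.getSimpleName() + " stored as " + source);
            }
        };
    }
}
```

Registering a factory like this with Spring's conversion service lets one class cover every enum in the app, instead of writing a converter per enum.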
I was thinking everything was good to go at this point, but then my testing discovered that our new Angular.js-driven feature was broken. After much tracing and debugging I determined that some @RequestBody parameters that would accept an empty string in Spring 4.0 now had that empty string converted to null in 4.1, and the methods were failing. I marked those parameters with @RequestBody(required = false) and we were back up and running.
By this point it was already Wednesday morning, so I pushed the code to Git and let Jenkins run our tests on it. I immediately started getting unit test failures saying it couldn't find the aspectOf() factory method. This didn't make any sense, as when you do compile-time aspect weaving this method is supposed to be automatically added to your aspect classes. I ended up digging through the AspectJ source code, as my first thought was: what if somehow @Aspect isn't being processed when you turn off annotation processing? I wasted way too much time down this path before realizing it couldn't be that, as the whole point of ajc is to compile the aspects. Then, digging through the Jenkins build log, I saw what was really happening: maven-compiler-plugin 3.1 was actually running after the ajc compiler, overwriting my woven classes with unwoven ones. When researching this issue I saw that a lot of people set the maven-compiler-plugin's execution to <phase>none</phase>. We have ours set to test-compile, with the ajc compiler set for the compile phase. When I tried to move test-compile to the ajc compiler as well, it didn't like some things we were doing with generics. I feel like the ideal fix would be to do that and not use the Maven compiler at all, but since I just wanted to get this in and working, I reverted to maven-compiler-plugin version 2.5.1, which doesn't seem to fire up again and rewrite the classes. A todo for the future will be to try using the ajc compiler entirely for the modules which need aspects woven into them.
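A rough sketch of the arrangement described above, with the ajc compiler owning the compile phase and javac pushed later so it doesn't clobber the woven classes. Execution ids and structure here are illustrative, not copied from our pom:

```xml
<!-- javac: moved off the compile phase; 2.5.1 also doesn't re-run and
     overwrite ajc's woven output the way 3.1 did -->
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-compiler-plugin</artifactId>
  <version>2.5.1</version>
  <executions>
    <execution>
      <id>default-compile</id>
      <phase>test-compile</phase>
    </execution>
  </executions>
</plugin>

<!-- ajc: does the real main-source compile so the aspects get woven -->
<plugin>
  <groupId>org.codehaus.mojo</groupId>
  <artifactId>aspectj-maven-plugin</artifactId>
  <executions>
    <execution>
      <phase>compile</phase>
      <goals>
        <goal>compile</goal>
      </goals>
    </execution>
  </executions>
</plugin>
```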
After I had the AspectJ issues solved, I started seeing out-of-memory errors in our unit test phase. I banged my head on this for a day and a half before figuring out that, for some reason, the combination of powermock-1.6.2 and junit-4.12 seems to leak memory or not shut down some threads that are fired up while running the tests. I tried going back to powermock-1.5.6 and junit-4.11 (since 4.12 isn't compatible with powermock-1.5.6) and finally everything was happy and working. It was code reviewed yesterday and is now merged into master, so I am pretty excited to finally have reached the end of this project. I am hoping that the aspectj-maven-plugin will be resurrected from the dead and brought into MojoHaus with my fix so that I can move away from my forked copy, but time will tell. Up next will be figuring out how to migrate to Spring Security 4.0.
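For anyone hitting the same thing, the workaround is just pinning the test dependencies back in the pom (the powermock artifact id shown is one of several PowerMock modules; which ones you pin depends on your setup):

```xml
<!-- Pinned back: junit-4.12 + powermock-1.6.2 leaked memory in our runs -->
<dependency>
  <groupId>junit</groupId>
  <artifactId>junit</artifactId>
  <version>4.11</version>
  <scope>test</scope>
</dependency>
<dependency>
  <groupId>org.powermock</groupId>
  <artifactId>powermock-module-junit4</artifactId>
  <version>1.5.6</version>
  <scope>test</scope>
</dependency>
```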