Spring Security 4.0

I was checking the Spring Blog today to see what was new after taking much of the week off. I came upon the following entry. Of course I was very interested, as Spring Security 4.0 has been hyped for a few months now, so I figured I would check out the migration guide from 3.2 to see what will be involved for us to upgrade. I found this in the new features section: they have added a feature which will automatically prepend ROLE_ to any role you use in Spring Security if it doesn't already start with that prefix. So if you have a role called ROLE_USER for a standard user, you can now just say @PreAuthorize("hasRole('USER')").

The problem I have with this is that it assumes you name all your roles with a ROLE_ prefix. Unfortunately, we don't. When we migrated to Spring Security for our new architecture we already had a system with over 700 different user rights in it. We mapped our user right system into Spring Security by implementing GrantedAuthority on our UserRights. This allowed us to maintain compatibility with our legacy architecture, which some of our system still runs on, while seamlessly migrating other parts over to the new architecture. This is one of those opinionated things about the framework that is going to cause us a lot of issues. Now we are forced to conform to what Spring Security thinks we should name things, instead of it just checking our collection of granted authorities to see if we have permission to carry out an operation. This is one of those changes that risks breaking things for many people, and for what gain? Not having to type five characters? I don't really get it, and I am annoyed, as it means we may have to do some ugly hack like having our getter prepend the prefix so we can keep our legacy names but still satisfy the new "feature". If nothing else, it means it will be longer before we think about making this change; why add extra pain for yourself unless you see a big benefit, and at this point I don't see a compelling reason why I need to upgrade right now. So maybe we will get around to upgrading in the next year; time will tell.
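To illustrate the kind of hack I mean, here is a minimal sketch, assuming a simplified UserRight class standing in for our actual rights system, of a GrantedAuthority implementation whose getter prepends ROLE_ so the legacy names stay untouched internally while still satisfying the new prefix convention:

```java
import org.springframework.security.core.GrantedAuthority;

// Hypothetical simplified version of our UserRight class; the real
// rights system has over 700 rights backed by the legacy architecture.
public class UserRight implements GrantedAuthority {

    private final String legacyName; // e.g. "ACCOUNT_ADMIN"

    public UserRight(String legacyName) {
        this.legacyName = legacyName;
    }

    // Spring Security 4.0's hasRole() now expects a ROLE_ prefix, so we
    // prepend it here while the rest of our code keeps using the legacy name.
    @Override
    public String getAuthority() {
        return legacyName.startsWith("ROLE_") ? legacyName : "ROLE_" + legacyName;
    }

    public String getLegacyName() {
        return legacyName;
    }
}
```

With something like that in place, @PreAuthorize("hasRole('ACCOUNT_ADMIN')") would match the legacy right, but it is exactly the kind of naming shim we shouldn't have to write.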

Cassandra Data Modeling

I ended up having to miss the JHipster webinar last week, as I was invited by my company to attend the DataStax DS220: Data Modeling with DataStax Enterprise class on Monday and Tuesday. DataStax came out and taught the class onsite. The instructor was Andrew Lenards and he did a great job.

I have been using Cassandra for a little while, but I hadn't done anything serious with it. The CQL query language is at once a great blessing and a curse. On the upside, it is immediately familiar, so anyone who has done SQL work can get comfortable creating tables and executing queries quickly. On the downside, it abstracts away parts of the data store, and at a certain point you need to understand what is going on under the hood to get good performance. This class gave us that. It starts out presenting a data model like you might see in a relational database, and then you work through the ways you might model that data in Cassandra and the trade-offs of the different models (which questions you can ask, which fields are required to ask those questions, etc.). One of the biggest things I was missing prior to the class was the whole concept of partitions vs. rows, and what the partition key is vs. the clustering keys. I had been using the data store like a SQL database, so my partitions always had at most one row. We spent a lot of time looking at what happens if you instead model the data so that partitions have many rows, and the advantages and disadvantages of doing so. On day two we got very deep into the technical aspects of what goes on under the hood, how data is stored on disk, and how to do things like estimate partition sizes. We were also able to ask a lot of questions specific to how we have been using Cassandra in our organization and what the limitations are going to be as we expand its usage to even more areas of our product.
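To make the partition vs. row distinction concrete, here is a minimal sketch using the DataStax Java driver; the keyspace, table names, and sensor-reading example are all made up for illustration. The first table is keyed the way I had been doing it (one row per partition); the second puts many time-ordered rows in each partition:

```java
import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.Row;
import com.datastax.driver.core.Session;

public class PartitionModelingSketch {
    public static void main(String[] args) {
        // Assumes a Cassandra node on localhost and an existing "demo" keyspace.
        Cluster cluster = Cluster.builder().addContactPoint("127.0.0.1").build();
        Session session = cluster.connect("demo");

        // One row per partition: each reading is its own partition, so the
        // only cheap question you can ask is "give me this one reading by id".
        session.execute("CREATE TABLE IF NOT EXISTS readings_by_id ("
                + " reading_id uuid PRIMARY KEY,"
                + " sensor_id text, taken_at timestamp, value double)");

        // Many rows per partition: sensor_id is the partition key and
        // taken_at is a clustering key, so all of a sensor's readings live
        // together on disk, sorted by time, making range queries cheap.
        session.execute("CREATE TABLE IF NOT EXISTS readings_by_sensor ("
                + " sensor_id text, taken_at timestamp, value double,"
                + " PRIMARY KEY ((sensor_id), taken_at))"
                + " WITH CLUSTERING ORDER BY (taken_at DESC)");

        // This query is only efficient because of how the second table is keyed.
        for (Row row : session.execute("SELECT taken_at, value FROM readings_by_sensor"
                + " WHERE sensor_id = 'sensor-1' LIMIT 10")) {
            System.out.println(row);
        }

        cluster.close();
    }
}
```

The trade-off, of course, is that a busy sensor's partition keeps growing, which is exactly why the class spent time on estimating partition sizes.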

I think anyone who is going to roll out Cassandra at the enterprise level really needs to take this class. Based on what we covered, it is going to save us a lot of future pain as well as make us much more productive with our current usage. Prior to this class we had one person on our team with a pretty deep understanding, so this was a way to get many of us senior engineers up to speed very quickly.

Signal Messenger

I just wanted to mention that Signal has been released, so for all the iPhone users out there it is definitely worth installing. Signal is an implementation of TextSecure on iOS. Given the insecurity of text messages, and the varying degrees of security in other messengers, Open Whisper Systems has released Signal. Some popular chat programs like WhatsApp are starting to encrypt, but they aren't always encrypted. One of the biggest benefits of Signal is that they released the source code, so anyone can audit it. Granted, if you are installing it from the App Store there is still the risk of a back door being in there, but anyone so inclined, with a developer license, could build their own copy and install it on their phone. Of course, that also assumes your copy of Xcode hasn't been tampered with. At the end of the day, if you are a target they are probably going to get your stuff, but the benefit of widespread use of secure products would be to prevent the wholesale metadata and data collection that has been alleged to be happening, so that innocent people are left alone. The EFF has a nice scorecard that ranks the security aspects of different messenger apps. Here is another story suggesting the use of this app. If you are on Android, TextSecure is the equivalent app.

Cobertura is gone and Clover is here

I have spent most of this week working on integrating Clover into our environment and ripping out Cobertura. I ran into a couple of issues along the way, but we are up and running now. The first thing I dislike about Clover is that, by default, it instruments the Maven artifacts you may intend to ship. I think this is a poor way to instruct people to configure it out of the box, because you are basically saying you only run it every so often on special builds, or you end up having to invoke Maven multiple times, or other associated hacks. I didn't like any of those options, as the idea is to fail the build if coverage drops below the acceptable level and not accept the commit until that is addressed. Luckily I stumbled upon the clover2:instrument goal, which you can use instead of the recommended default clover2:setup goal and which, as I understand it, does its instrumentation in a forked build lifecycle so the artifacts you ship are left alone. But then I hit a second problem: the way clover2:instrument names the instrumented classes seemed to clash with the JPA 2.0 Metamodel generator we were using. I had been looking for an excuse to rip that whole thing out of the project for a while, and now I finally had one, so I removed it from our software and replaced it with plain reflection on the classes, using unit tests to verify at test time that the code wasn't broken instead of the compile-time checks we got from the metamodel (see the sketch below). With that gone, Clover integrated cleanly and I got it wired into our Jenkins configuration. Today I was able to get our configuration manager to install the Clover plugin into Jenkins instead of using the publish-HTML-report option, and we have much nicer integration. With the Sonar Clover plugin we now have integration with SonarQube. The Sonar plugin brings in the coverage, but it no longer lists technical debt like the Cobertura plugin did. Aside from that, I think this is going to be a much better solution for us going forward, and I am glad we could finally switch.
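For the curious, here is a minimal sketch of the reflection-based replacement, assuming a hypothetical entity class and a list of attribute names that our query code now references as plain strings; the test fails the build if a referenced attribute disappears from the entity:

```java
import static org.junit.Assert.assertNotNull;

import java.lang.reflect.Field;
import org.junit.Test;

public class EntityAttributeNamesTest {

    // Hypothetical entity standing in for one of our real JPA classes.
    static class Account {
        private Long id;
        private String ownerName;
    }

    // Attribute names our queries reference as strings now that the
    // generated metamodel (e.g. Account_.ownerName) is gone.
    private static final String[] REFERENCED_ATTRIBUTES = {"id", "ownerName"};

    @Test
    public void referencedAttributesExistOnEntity() throws Exception {
        for (String name : REFERENCED_ATTRIBUTES) {
            // getDeclaredField throws NoSuchFieldException if a field was
            // renamed or removed, so the build fails at test time instead
            // of at compile time like it did with the metamodel.
            Field field = Account.class.getDeclaredField(name);
            assertNotNull(field);
        }
    }
}
```

It is a weaker guarantee than the compile-time checks the metamodel gave us, but it keeps a safety net without the code generation that was clashing with Clover's instrumentation.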

TLS in HTTP/2

I came across this blog post on Hacker News this morning. I thought it was a great post, so I figured I would share it. There was a group of people trying to weaken the HTTP/2 standard by removing the TLS encryption requirement that was originally proposed, and Google and Mozilla are working around that by requiring TLS for HTTP/2 in their browsers. I think they are taking the right stand here, as there is no excuse not to be encrypting anymore, and by taking this stand they will encourage more people to get on board with TLS while also getting the performance benefits of the new protocol.

On my site here I have the SPDY protocol enabled, and I hope to have full HTTP/2 support as soon as nginx supports it. I did find it slightly ironic that the great blog poster mentioned above doesn't actually have TLS enabled on his own blog. Let's go, people: it is 2015, time to secure your sites. Here is a story I found in the comments on the blog about the people who were working to undermine the new protocol; it is also worth a read.

Clover and Wikitree

Good news this week: our purchase of Clover was approved, and we will have our license keys in a matter of days. As of tomorrow it is going into our build and Cobertura is getting ripped out. You may recall I previously wrote about my issues with Cobertura. One problem was that the latest version at the time, 2.0.3, didn't work with PowerMock, even though 1.9.4.1 did. The second issue was the lack of Java 8 support, since we are close to upgrading on our project at work. Oddly enough, early this week I saw Cobertura had a new Maven plugin and a new release, 2.1.1. I immediately updated to the 2.7 plugin to give it a go, and it promptly failed on PowerMock just like 2.0.3 did. So I didn't feel bad at all when, two days later, I found out our Clover purchase request had been approved.

The other thing I have been messing around with in my spare time is WikiTree. WikiTree is basically a wiki meets Ancestry. Full disclosure: I am currently an Ancestry.com subscriber. What I like about WikiTree vs. Ancestry is that, in theory, it is one tree that everyone is working on. Instead of everyone having their own tree, sort of connecting to other people researching common ancestors and pulling in some of their data, the goal of this project is just one tree that you link into when you meet up at common ancestors. This seems like a better model for collaboration, assuming the people who share your common ancestors are open to working with you on their pages. They also seem to do privacy well, as Ancestry does, in that if people are alive a much more limited set of information is released, and the farther back you go the more information is public. Now, I have one HUGE complaint with WikiTree: they don't support SSL. Not for the entire site, nor even just the authentication page. This is pretty ridiculous in 2015. If they had it on the authentication page, at least your password wouldn't be flying across the network in plain text, though your session could still be hijacked. At this point I think it reflects pretty poorly on any organization if they don't support basic web security. It is worse on a site like this, where they purport to protect the privacy of the people working on it without actually taking the most basic of steps to do so.

Me being me, I figured I would email Chris Whitten, the founder of the site. To his credit, he immediately got back to me; however, his response was less than satisfactory. He stated the following:

Hi Jeffrey,

We worked on implementing SSL site-wide and it was much more difficult
than expected. We could protect just your login password, if that’s
your concern.

Chris

To me this response is from someone who fundamentally doesn't get it. While protecting the privacy of my login password would be a good start, given the sensitivity of some of the information involved (birth dates, mothers' maiden names, etc.) I would expect security to be a higher priority at the site, but apparently that isn't the case. Hopefully they make it more of a priority, as I would like to make WikiTree my main family tree site and migrate away from Ancestry, but if they don't care about their members' security I am not sure I will be able to do so.