The Chug

So far this year I have not been keeping to my blogging schedule. Since posting my themes for the year I have been AWOL, too busy working on launching a product in my current role. That being said, a friend of mine just launched a new show on YouTube called The Chug. If you are into craft beer like I am, check it out at: http://thechug.life

Google Kubernetes Engine

Introduction

I have been messing around with Google Kubernetes Engine for the last few weeks (as we are deploying my new app to it) and I have to say that overall I am impressed. There has been a lot of talk about Kubernetes for a while, and at first I wondered whether it was just the next piece of tech being over-hyped like so many things before it. Having used it for a month now, I understand why people are so excited about it. The learning curve is steep, but once you climb it, you will really appreciate the power of the platform.

As stated in my previous post, I have been building a microservice architecture in Go, and for deployment we decided to go with GKE. Go is extremely friendly for Docker containers: I have been using Alpine as my base image and the container size of each service has been really tiny (around 10MB or less). That is quite a difference compared to Java containers, which end up very large when you think of all the jar files that go into a typical Spring Boot app. There are a few things you need to do to build your Go app for Docker. You need to disable CGO and tell the build to use Go's networking for DNS resolution rather than relying on glibc's, as Alpine is built on musl libc. The other great thing about Go is that the cross compiler is built in, so you are an environment variable away from being able to compile your app for Linux even when running on a Mac as I do. The only other thing I do is add ca-certificates to my Alpine base for SSL connections.
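
As a concrete sketch of what that ends up looking like (the binary name and Alpine version here are placeholders, not my exact setup):

# Cross-compile on the Mac first, e.g.:
#   CGO_ENABLED=0 GOOS=linux go build -tags netgo -o auth-service .
# CGO_ENABLED=0 plus the netgo tag keep the binary off glibc (Alpine ships
# musl), and GOOS=linux is the one environment variable needed to cross-compile.
FROM alpine:3.8
# ca-certificates so the service can make outbound SSL/TLS connections.
RUN apk add --no-cache ca-certificates
# Copy in the statically linked binary built above.
COPY auth-service /auth-service
ENTRYPOINT ["/auth-service"]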

History

Initially, when I started building my backend, I was using Spring Cloud Gateway and Consul to handle load balancing and service discovery. When I went to bring our services to the cloud, I discovered those would no longer be needed, as Kubernetes has built-in load balancing and service discovery. The integration between Spring Cloud and Consul is great: you just register your app name in Consul and Spring Cloud automatically routes to it by name. I have an Auth service named auth, so I would hit Spring Cloud at http://localhost:8080/auth and it would look up the auth service in Consul and automatically route the request to my auth service, which was running on port 9999 at the root path. I wanted to keep the same sort of approach for my Ingress and service discovery in Kubernetes.

Kubernetes the beginning…

Initially I started with the default Google Ingress, which behind the scenes provisions a Google Cloud HTTP(S) Load Balancer and then routes requests to the different services based on your Kubernetes Service and Ingress definitions. I was having issues getting the Google Ingress to work correctly with the URL rewriting I wanted to maintain from my previous design, and I discovered a second issue: it doesn't support WebSockets. We are planning on doing a lot of communication between the backend and front end over a WebSocket, so this was going to be a painful limitation for us. We considered using Server-Sent Events to push events to the client and REST calls for the messages from the client, but this wasn't ideal.

I did some digging around and discovered that you can install NGINX as your Ingress controller, and behind the scenes it provisions a Google Cloud Load Balancer that is a TCP load balancer. The advantage is that we can now send WebSockets into our cluster. Instead of SSL terminating at the GCLB, it would now terminate at our Ingress when we were ready for it. As soon as I switched to NGINX, all the URL rewriting issues I was seeing went away.

TLS Configuration

Once I had traffic flowing into my cluster from the outside, I decided it was time to provision TLS. I discovered cert-manager, which allowed me to configure my certificates through Let's Encrypt, my preferred certificate issuer. When I ran the SSL Labs test against my cluster, I found that SSL was configured really well and I scored an A+. The only issue I am having is figuring out how to enable support for TLSv1.3; I haven't been able to get the Ingress to support it, even after messing with the NGINX ConfigMap. It is supposedly supported in the current version of the NGINX Ingress, but it is disabled by default and I am still fighting that part of the config.
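
For reference, this is roughly the ConfigMap tweak I have been experimenting with; the ConfigMap name and namespace depend on how the NGINX Ingress controller was installed, so treat them as placeholders. (As far as I can tell, TLSv1.3 in NGINX also needs the controller image to be built against OpenSSL 1.1.1, so an older image may silently ignore this setting.)

apiVersion: v1
kind: ConfigMap
metadata:
  # Placeholder name/namespace; match whatever your controller deployment uses.
  name: nginx-configuration
  namespace: ingress-nginx
data:
  # Advertise TLSv1.3 alongside TLSv1.2.
  ssl-protocols: "TLSv1.2 TLSv1.3"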

Configuration examples

The final challenge I faced was supporting multiple URLs for my cluster and routing through the Ingress based on which URL was being requested. I created a fanout Ingress, which worked great, but I struggled to find a config that would allow me to have multiple TLS secrets depending on the URL. I finally found that config and thought I would share it here (with sensitive details changed).

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
    nginx.ingress.kubernetes.io/rewrite-target: "/"
    nginx.org/websocket-services: haskovec-api-service
  name: nginx-haskovec-api-ingress
spec:
  rules:
  - host: api.haskovec.com
    http:
      paths:
      - backend:
          serviceName: haskovec-api-service
          servicePort: 9999
        path: /auth
      - backend:
          serviceName: haskovec-api-service
          servicePort: 9998
        path: /service2
  - host: service2.haskovec.com
    http:
      paths:
      - backend:
          serviceName: service2-service
          servicePort: 9999
        path: /auth
      - backend:
          serviceName: service2-service
          servicePort: 9998
        path: /service2
  # This section is only required if TLS is to be enabled for the Ingress
  tls:
  - secretName: api-certificate-secret
    hosts:
    - api.haskovec.com
  - secretName: service2-certificate-secret
    hosts:
    - service2.haskovec.com


The above Ingress shows multiple hosts routing to multiple services (and the pods behind them) based on the domain name and path. It also shows what multiple TLS certificate secrets look like.
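
For completeness, the Ingress above assumes a Service that exposes both ports. A minimal sketch of what that Service could look like (the selector label is a placeholder for whatever labels your Deployment actually uses):

apiVersion: v1
kind: Service
metadata:
  name: haskovec-api-service
spec:
  # Placeholder selector; it must match the pod labels in the backing Deployment.
  selector:
    app: haskovec-api
  ports:
  - name: auth
    port: 9999
    targetPort: 9999
  - name: service2
    port: 9998
    targetPort: 9998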

To create your certificates with cert-manager you will need to configure an Issuer as below (again, details changed):

apiVersion: certmanager.k8s.io/v1alpha1
kind: Issuer
metadata:
  name: letsencrypt-production
spec:
  acme:
    # The ACME server URL
    server: https://acme-v02.api.letsencrypt.org/directory
    # Email address for ACME registration
    email: jeff@haskovec.com
    # Name of a secret used to store the ACME account private key
    privateKeySecretRef:
      name: letsencrypt-production
    # Enable the http-01 challenge provider
    http01: {}

And a certificate:

apiVersion: certmanager.k8s.io/v1alpha1
kind: Certificate
metadata:
  name: api.haskovec.com
spec:
  secretName: api-certificate-secret
  commonName: api.haskovec.com
  dnsNames:
  - api.haskovec.com
  issuerRef:
    name: letsencrypt-production
    kind: Issuer
  acme:
    config:
    - http01:
        ingressClass: nginx
      domains:
      - api.haskovec.com

You configure an additional Certificate like the one above for each domain name you are getting certificates for. And just like that, cert-manager automagically goes out to Let's Encrypt, gets a certificate for that domain name, and stores it in the referenced secret.
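
For example, the second host from the Ingress above gets its own Certificate pointing at the secret named in that Ingress's tls section; it looks roughly like this:

apiVersion: certmanager.k8s.io/v1alpha1
kind: Certificate
metadata:
  name: service2.haskovec.com
spec:
  secretName: service2-certificate-secret
  commonName: service2.haskovec.com
  dnsNames:
  - service2.haskovec.com
  issuerRef:
    name: letsencrypt-production
    kind: Issuer
  acme:
    config:
    - http01:
        ingressClass: nginx
      domains:
      - service2.haskovec.com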

Conclusion

All things considered, I am blown away by working in a Kubernetes environment. Google takes away the pain of actually provisioning your cluster so you can focus on your app. Deployments are a breeze: I just push my new Docker containers, update my deployment YAML file, apply it, and Kubernetes does a rolling update of all my services. This allows us to have infrastructure like a company with a huge DevOps team when in reality we have no DevOps engineers. I will definitely be using this on my projects going forward!
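
To make that concrete, here is a stripped-down sketch of the kind of Deployment I mean; the names, image, and replica count are placeholders, and a kubectl apply -f on the updated file is what triggers the rolling update:

# Sketch only: re-applying this file with a new image tag triggers a
# rolling update of the service.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: auth
spec:
  replicas: 2
  selector:
    matchLabels:
      app: auth
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 0
      maxSurge: 1
  template:
    metadata:
      labels:
        app: auth
    spec:
      containers:
      - name: auth
        # Placeholder image; bump the tag and re-apply to roll out a new build.
        image: gcr.io/my-project/auth-service:v2
        ports:
        - containerPort: 9999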

Docker

For several months now I have been hearing all the hype about Docker on the blogs. I have mostly been ignoring it, skimming a post here and there, but I haven't been that interested. One of my coworkers, on the other hand, has taken a big interest and has started putting the different services we run into containers.

When we started out with our new architecture, we were requiring people to install different services to get their development environment up and running. At first this wasn't that big of a deal: you needed to install RabbitMQ in addition to JBoss and set up a SQL Server database. Then we added memcached into the mix. At that point environment setup was getting pretty complex for anyone new we hired, and our architect came up with a solution to make it easier: use a VirtualBox image to host RabbitMQ and memcached, as well as the newly added Solr and ZooKeeper. This was a great solution for a while; it allowed us to get people up and running much faster and to add new things as we needed them (like Cassandra). There are a couple of problems with this solution, though. If we roll out a new version of, say, Cassandra, like we are doing now, you lose all of your data. The other issue is that our architect was promoted and this solution is no longer being maintained.

Enter Docker! My coworker who was very interested in this technology started doing the research and work to set up all of our services inside of Docker, to make our environments easier to maintain and set up than the current VirtualBox solution, and it has the potential to be used all the way from the development environment through the testing and staging environments into production. One of the big problems with the one big VM holding all the services is that you can't update just one service at a time; you reload the whole image, which means your Cassandra data gets wiped out and you have to reimport it even if Cassandra doesn't need to be updated. A single image also means one person has to maintain the whole thing, whereas with Docker each service can be maintained by a different person. For example, I am working on an upgrade to JBoss EAP 6.4; I could publish a container with our customizations and everyone else could just pull down the latest image without having to do most of the configuration.
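
We haven't settled on the exact tooling yet, but something along the lines of a Compose file makes the one-container-per-service idea concrete. The images and versions below are placeholders, and the named volume is one way to keep Cassandra's data across image upgrades:

version: '2'
services:
  rabbitmq:
    image: rabbitmq:3-management
    ports:
      - "5672:5672"
      - "15672:15672"
  memcached:
    image: memcached:1.4
    ports:
      - "11211:11211"
  zookeeper:
    image: zookeeper:3.4
    ports:
      - "2181:2181"
  solr:
    image: solr:5
    ports:
      - "8983:8983"
  cassandra:
    image: cassandra:2.2
    ports:
      - "9042:9042"
    volumes:
      # Named volume so upgrading the Cassandra image does not wipe the data.
      - cassandra-data:/var/lib/cassandra
volumes:
  cassandra-data: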

We have a mixture of Linux and Windows on the client side at the office, and we deploy to Linux. Since Docker is a native Linux solution, we need to use boot2docker on Windows. We also have a couple of developers who use Macs they brought from home, so they too would need something like the boot2docker approach. The Linux configuration of Docker is nearly complete and almost ready for people to start moving to. The Windows setup is having an issue connecting to Cassandra inside of the Docker container. Once we get that ironed out, I expect we will move forward in this direction, as it seems like an amazing platform. In the end, I thought Docker was a lot of hype when I saw it splashed everywhere in the blogs, but after beginning to use it I see why people are excited. It does have a bit of a learning curve, but I look forward to messing around with it some more.