Introduction
I have been messing around with Google Kubernetes Engine for the last few weeks (we are deploying my new app to it), and I have to say that overall I am impressed. There has been a lot of talk about Kubernetes for a while, and at first I wondered whether it was just the next over-hyped piece of tech. Having used it for a month now, I understand why people are so excited about it. The learning curve is steep, but once you climb it you will really appreciate the power of the platform.
As stated in my previous post, I have been building a microservice architecture in Golang, and for deployment we decided to go with GKE. Go is extremely friendly for Docker containers: I have been using Alpine as my base image, and each service's container ends up really tiny (around 10MB or less). This is quite a difference from Java containers, which end up very large when you consider all the jar files that go into a typical Spring Boot app. There are a few things you need to do to build your Go app for Docker. You need to disable CGO and tell the build to use Go's own networking for DNS resolution rather than relying on glibc, since Alpine is built on musl libc. The other great thing about Go is that the cross compiler is built in, so you are an environment variable away from compiling your app for Linux even when running on a Mac as I do. The only other thing I do is add ca-certificates to my Alpine base image for SSL connections.
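To make that concrete, here is a rough sketch of the kind of Dockerfile I am describing. The image tags, module path, and binary name are placeholders, but the build flags are the ones the paragraph above covers:

FROM golang:alpine AS build
WORKDIR /src
COPY . .
# CGO_ENABLED=0 disables cgo; the netgo tag forces Go's own DNS resolver,
# so the binary does not depend on glibc (Alpine ships musl libc instead).
# GOOS=linux cross-compiles for Linux even when building on a Mac.
RUN CGO_ENABLED=0 GOOS=linux go build -tags netgo -o /app ./cmd/auth

# Runtime stage: a tiny Alpine image with CA certificates for outbound SSL/TLS
FROM alpine:3.8
RUN apk add --no-cache ca-certificates
COPY --from=build /app /app
ENTRYPOINT ["/app"]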
History
Initially, when I started building my backend, I was using Spring Cloud Gateway and Consul to handle load balancing and service discovery. When I went to bring our services to the cloud, I discovered those would no longer be needed, as Kubernetes has built-in load balancing and service discovery. The integration between Spring Cloud and Consul is great: you register your app by name in Consul and Spring Cloud automatically routes to it based on that name. I have an auth service named auth; I would hit Spring Cloud Gateway at http://localhost:8080/auth, it would look up the auth service in Consul, and it would automatically route the request to my auth service, which was running on port 9999 at /. I wanted to keep the same sort of approach for my Ingress and service discovery in Kubernetes.
Kubernetes the beginning…
Initially I started with the default Google Ingress, which behind the scenes provisions a Google Cloud HTTP(S) Load Balancer and then routes requests to the different services based on your Kubernetes Services and Ingress rules. I was having issues getting the Google Ingress to work correctly with the URL rewriting I wanted to keep from my previous design, and then I discovered a second issue: it doesn't support WebSockets. We are planning on doing a lot of communication between the backend and frontend over a WebSocket, and this was going to be a painful limitation for us. We considered using Server-Sent Events to push events to the client and REST calls for the messages from the client, but this wasn't ideal.
I did some digging around and discovered that you can install NGINX as your Ingress controller; behind the scenes it provisions a Google Cloud Load Balancer that is a TCP load balancer. The advantage of this is that we can now send WebSockets into our cluster. Instead of our SSL terminating at the GCLB, it would now terminate at our Ingress when we were ready for it. As soon as I switched to NGINX, all the URL rewriting issues I was seeing went away.
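To make the TCP load balancer part concrete: the NGINX Ingress controller is exposed through a Kubernetes Service of type LoadBalancer, and it is that Service which causes GKE to provision the TCP load balancer in front of the controller pods. A minimal sketch (the name, namespace, and selector label here are illustrative, not taken from my cluster):

apiVersion: v1
kind: Service
metadata:
  name: nginx-ingress-controller   # illustrative name
  namespace: ingress-nginx
spec:
  type: LoadBalancer   # GKE provisions a TCP (network) load balancer for this
  selector:
    app: nginx-ingress-controller   # must match the controller pods' labels
  ports:
  - name: http
    port: 80
    targetPort: 80
  - name: https
    port: 443
    targetPort: 443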
TLS Configuration
Once I had traffic coming into my cluster from the outside, I decided it was time to provision TLS. I discovered cert-manager, which allowed me to configure my certificates through Let's Encrypt, my preferred certificate issuer. When I ran the SSL Labs test against my cluster I found that SSL was configured really well, and I scored an A+ on their test. The only issue I am having is figuring out how to enable support for TLSv1.3.
I haven’t been able to get the Ingress to support it, even after messing with the NGINX ConfigMap. It is supposedly supported in the current version of the NGINX Ingress, but it is disabled by default and I am still fighting with that part of the config.
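For anyone fighting the same battle, the knob I have been experimenting with is the controller's ssl-protocols option in its ConfigMap. A sketch of what that looks like (the ConfigMap name and namespace depend on how your controller was installed, and TLSv1.3 also needs the controller image to ship a new enough OpenSSL):

apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-configuration   # name/namespace depend on your controller install
  namespace: ingress-nginx
data:
  # Add TLSv1.3 to the protocols the controller will negotiate
  ssl-protocols: "TLSv1.2 TLSv1.3"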
Configuration examples
The final challenge I faced was supporting multiple URLs for my cluster and routing the Ingress based on which URL was being requested. I created a fanout Ingress, which worked great, but I struggled to find a config that would allow me to have multiple TLS secrets depending on the URL. I finally found that config and thought I would share it here (with sensitive details changed).
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
    nginx.ingress.kubernetes.io/rewrite-target: "/"
    nginx.org/websocket-services: haskovec-api-service
  name: nginx-haskovec-api-ingress
spec:
  rules:
  - host: api.haskovec.com
    http:
      paths:
      - backend:
          serviceName: haskovec-api-service
          servicePort: 9999
        path: /auth
      - backend:
          serviceName: haskovec-api-service
          servicePort: 9998
        path: /service2
  - host: service2.haskovec.com
    http:
      paths:
      - backend:
          serviceName: service2-service
          servicePort: 9999
        path: /auth
      - backend:
          serviceName: service2-service
          servicePort: 9998
        path: /service2
  # This section is only required if TLS is to be enabled for the Ingress
  tls:
  - secretName: api-certificate-secret
    hosts:
    - api.haskovec.com
  - secretName: service2-certificate-secret
    hosts:
    - service2.haskovec.com
The above Ingress shows multiple hosts connecting to multiple pods behind their services, with routing based on the domain name and path. It also shows what multiple TLS certificate secrets look like.
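For context, each serviceName/servicePort pair in the Ingress refers to an ordinary Kubernetes Service. A hypothetical sketch of what haskovec-api-service could look like (the selector label and target ports are made up for illustration):

apiVersion: v1
kind: Service
metadata:
  name: haskovec-api-service
spec:
  selector:
    app: haskovec-api   # hypothetical pod label
  ports:
  - name: auth
    port: 9999
    targetPort: 9999
  - name: service2
    port: 9998
    targetPort: 9998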
To create your certificates with cert-manager you will need to configure an Issuer as below (again, details changed):
apiVersion: certmanager.k8s.io/v1alpha1
kind: Issuer
metadata:
  name: letsencrypt-production
spec:
  acme:
    # The ACME server URL
    server: https://acme-v02.api.letsencrypt.org/directory
    # Email address used for ACME registration
    email: jeff@haskovec.com
    # Name of a secret used to store the ACME account private key
    privateKeySecretRef:
      name: letsencrypt-production
    # Enable the HTTP-01 challenge provider
    http01: {}
And a certificate:
apiVersion: certmanager.k8s.io/v1alpha1
kind: Certificate
metadata:
  name: api.haskovec.com
spec:
  secretName: api-certificate-secret
  commonName: api.haskovec.com
  dnsNames:
  - api.haskovec.com
  issuerRef:
    name: letsencrypt-production
    kind: Issuer
  acme:
    config:
    - http01:
        ingressClass: nginx
      domains:
      - api.haskovec.com
You configure an additional Certificate like the one above for each domain name you need certificates for. And just like that, cert-manager automagically goes out to Let's Encrypt, gets a certificate for that domain name, and stores it in the referenced secret.
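For example, the second domain from the Ingress above would get its own Certificate pointing at the secret name referenced in that Ingress's tls section. Following the same pattern, it would look roughly like this:

apiVersion: certmanager.k8s.io/v1alpha1
kind: Certificate
metadata:
  name: service2.haskovec.com
spec:
  secretName: service2-certificate-secret
  commonName: service2.haskovec.com
  dnsNames:
  - service2.haskovec.com
  issuerRef:
    name: letsencrypt-production
    kind: Issuer
  acme:
    config:
    - http01:
        ingressClass: nginx
      domains:
      - service2.haskovec.com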
Conclusion
All things considered, I am blown away by working in a Kubernetes environment. Google takes away the pain of actually provisioning your cluster so you can focus on your app. Deployments are a breeze: I push my new Docker containers, update my deployment YAML file, apply it, and Kubernetes does a rolling update of all my services. This basically lets us have infrastructure like a company with a huge DevOps team when in reality we have no DevOps engineers. I will definitely be using this on my projects going forward!