With the completion of the migration to using Kubernetes, this project has reached the first usability milestone, so I'm going to call it v0.1.0.

I'll continue to use tags and milestones for large sets of features or usability improvements! Insignificant changes (manual scaling, single new components, tweaks or improvements) will get patch versions. Significant changes or milestones will be minor versions. Major versions will not happen, except maybe 1.0.0 when I think the cluster is "ready" and "stable". Who knows if that will ever happen!

See the README for the current state of affairs.

This release focuses on the complete transition from Docker Swarm to Kubernetes.

The primary improvements of this release are:

  • Conversion from Docker Swarm to Kubernetes
  • Converting the MacBook "node" from running VirtualBox to booting Ubuntu directly, gaining a few gigs of RAM and significantly better CPU performance.
  • Increased reliability of reverse proxying by using ingress-nginx and moving away from a manually configured HAProxy. HAProxy is powerful, but my administration of it was poor. Traefik, likewise, was excellent, but the complexity of that system wasn't worth it. Auto-configured nginx with kube-lego is much better.
  • Increased usability, which for me mainly means ingresses and monitoring. Creating an Ingress and a ServiceMonitor couldn't be easier, and gives you metrics collection and reverse proxying for free, both performant and well-configured.
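As a sketch of how little is needed, here is a hypothetical pairing of the two resources (names, hosts, and labels are placeholders; the Ingress uses the `extensions/v1beta1` API current at the time of this release):

```yaml
# Hypothetical app exposure: an Ingress for HTTP traffic...
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: myapp
spec:
  rules:
  - host: myapp.example.com
    http:
      paths:
      - path: /
        backend:
          serviceName: myapp
          servicePort: 80
---
# ...and a ServiceMonitor so Prometheus scrapes the app's metrics port.
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: myapp
spec:
  selector:
    matchLabels:
      app: myapp
  endpoints:
  - port: metrics
```

Two short manifests and the app is reverse-proxied and monitored.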

The project has come a long, long way from when I forwarded ports to my NAS and used the DS212 as a reverse proxy for three websites! It's been an exciting trip to learn about proxying and services; setting up infrastructure like DNS, LDAP, NFS; diving into the world of containers, containerizing everything, and then getting multiple hosts; and now, finally, making the first moves towards a proper private cloud that keeps up with the Joneses (in every way except performance)!

In the current release, there are two Kubernetes nodes: ubuntu-udoo-01 and ubuntu-mac-01. The router forwards ports 80 and 443 to the first, where traffic lands on the NGINX ingress controller. The controller proxies to services both inside and outside the cluster. Three websites run on the NAS, one on the router, and the rest within the cluster.
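Proxying to a backend outside the cluster works by giving it an in-cluster identity: a Service with no selector, plus a manually maintained Endpoints object pointing at the external host. A minimal sketch, assuming a hypothetical `nas-web` name and NAS address:

```yaml
# Hypothetical: represent an out-of-cluster backend (e.g. a site on the NAS)
# as a selector-less Service...
apiVersion: v1
kind: Service
metadata:
  name: nas-web
spec:
  ports:
  - port: 80
---
# ...backed by a hand-written Endpoints object with the external address.
apiVersion: v1
kind: Endpoints
metadata:
  name: nas-web    # must match the Service name
subsets:
- addresses:
  - ip: 192.168.1.10    # assumed NAS address
  ports:
  - port: 80
```

An Ingress can then route to `nas-web` exactly as it would to any in-cluster service.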

LDAP also resolves to the Kubernetes cluster internally, where it hits the NGINX ingress controller and is proxied at L4 straight to the NAS.
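ingress-nginx handles raw TCP (L4) exposure through its `tcp-services` ConfigMap, which maps an exposed port to a `namespace/service:port` backend. A minimal sketch, assuming a hypothetical `nas-ldap` Service fronting the NAS:

```yaml
# Hypothetical: L4 TCP passthrough in ingress-nginx. Each key is a port the
# controller should listen on; each value is the backend to proxy it to.
apiVersion: v1
kind: ConfigMap
metadata:
  name: tcp-services
  namespace: ingress-nginx
data:
  "389": "default/nas-ldap:389"
```

The controller must be started with `--tcp-services-configmap` pointing at this ConfigMap for the mapping to take effect.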

kube-lego is used to automatically obtain certificates from Let's Encrypt and store them in Kubernetes, hence the need for port 80 to be forwarded to the cluster.
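kube-lego watches for Ingresses carrying the `kubernetes.io/tls-acme` annotation and provisions a certificate into the named Secret. A hedged sketch with placeholder names:

```yaml
# Hypothetical: an Ingress opted in to automatic certificates via kube-lego.
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: myapp
  annotations:
    kubernetes.io/tls-acme: "true"    # tells kube-lego to manage this Ingress
spec:
  tls:
  - hosts:
    - myapp.example.com
    secretName: myapp-tls    # kube-lego stores the issued cert here
  rules:
  - host: myapp.example.com
    http:
      paths:
      - path: /
        backend:
          serviceName: myapp
          servicePort: 80
```

The HTTP-01 challenge is why port 80 must reach the cluster: Let's Encrypt validates domain ownership over plain HTTP before issuing the certificate.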

Data persistence is handled by the NAS: a hand-configured NFS share for the private container registry, and manually provisioned iSCSI LUNs for GitLab and Prometheus.
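Hand-provisioned storage like this maps to manually created PersistentVolumes. A sketch of both flavors, with assumed addresses, paths, sizes, and IQN:

```yaml
# Hypothetical: an NFS-backed PersistentVolume for the registry...
apiVersion: v1
kind: PersistentVolume
metadata:
  name: registry-nfs
spec:
  capacity:
    storage: 50Gi
  accessModes:
  - ReadWriteMany    # NFS allows multiple writers
  nfs:
    server: 192.168.1.10       # assumed NAS address
    path: /volume1/registry    # assumed export path
---
# ...and an iSCSI-backed PersistentVolume, as used for GitLab and Prometheus.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: gitlab-iscsi
spec:
  capacity:
    storage: 100Gi
  accessModes:
  - ReadWriteOnce    # a LUN is attached to one node at a time
  iscsi:
    targetPortal: 192.168.1.10:3260
    iqn: iqn.2000-01.com.example:nas.gitlab    # placeholder IQN
    lun: 1
    fsType: ext4
```

Workloads then claim these through matching PersistentVolumeClaims.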

Monitoring the cluster is done with the Prometheus Operator by CoreOS: a single Prometheus instance, a three-replica HA set of Alertmanagers, and a Grafana web UI, all exposed by Ingresses. Very little scraping is done beyond the default deployment, except for Grafana itself and the NGINX ingress controller's VTS stats.
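With the Operator, that topology is declared through its custom resources. A rough sketch, with hypothetical names and namespace:

```yaml
# Hypothetical: a single Prometheus instance wired to an HA Alertmanager set.
apiVersion: monitoring.coreos.com/v1
kind: Prometheus
metadata:
  name: main
spec:
  replicas: 1
  serviceMonitorSelector: {}    # pick up all ServiceMonitors
  alerting:
    alertmanagers:
    - namespace: monitoring
      name: alertmanager-main
      port: web
---
# The three-replica HA Alertmanager set mentioned above.
apiVersion: monitoring.coreos.com/v1
kind: Alertmanager
metadata:
  name: main
spec:
  replicas: 3
```

The Operator turns these into StatefulSets and keeps the Prometheus configuration in sync with the ServiceMonitors it discovers.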

Docker Swarm has been purged from the system, never to return! It was fun while it lasted.

For upcoming infrastructure features, see the README! Lots of exciting things to do :)