Move old projects to dot-folder and try to fix links again.

parent 3ab518fa
@@ -41,13 +41,13 @@ Lastly, I have split up the single mono-repo into individual repos to support si
### Added
- Created two new Kubernetes clusters: [palpantlab-gke-west1-b-01](README.md#palpantlab-gke-west1-b-01), the first GKE cluster, and [palpantlab-sfo](README.md#palpantlab-sfo), the single-node Kubernetes master run on `ubuntu-node-01`
- cert-manager and Certificate objects using NGINX ingress for all services.
- Deployed [boxomon](boxomon:README.md), a small 3-tier webapp that delivers nice messages every day, mostly to explore golang and Angular.
- Deployed [transmission-web](palpantlab-transmission:README.md), a quick transmission client that passes all traffic over a VPN.
- Deployed [boxomon](https://gitlab.palpant.us/boxomon), a small 3-tier webapp that delivers nice messages every day, mostly to explore golang and Angular.
- Deployed [transmission-web](https://gitlab.palpant.us/palpantlab-transmission), a quick transmission client that passes all traffic over a VPN.
### Changes
- Split the mono-repo into many sub-repos, each included as a submodule here.
- Redeployed most webservices onto GKE, including GitLab, personal websites, Boxomon, and NGINX ingress.
- Redeployed GitLab + CI as a [cloud-native helm chart](palpantlab-gitlab)
- Redeployed GitLab + CI as a [cloud-native helm chart](https://gitlab.palpant.us/justin/palpantlab-gitlab)
- Updated GitLab to 11.10.4 and allowed clone-over-SSH
- On-prem reverse-proxying and certificate management reverted to be handled by nas.sfo.palpant.us instead of nginx-ingress.
- Updated ubuntu-node-01 to Ubuntu 18.04.2, NVIDIA drivers to 430.26, Linux kernel to 5.1.15, Kubernetes to 1.15.0, and Docker to 18.09.3.
......
@@ -24,8 +24,6 @@ With version 1.0.0 of this repo and the changes of the last year, I have migrate
Previously, I had set up a three-node Kubernetes cluster on an odd collection of hardware I had lying around. Persistence was handled through NFS, iSCSI, and a local Ceph installation. Ingress was handled with nginx-ingress, and kube-lego was set up to generate certificates. Monitoring was done with Prometheus Operator, which bundles scalable Prometheus deployments, highly available alertmanager, and stateless Grafana while adding some Custom Resource Definitions that abstract away scraping configs. Gitlab was running in the cluster, along with a number of websites.
Code that has been replaced with a newer version is removed with v1.0.0. Code that has no replacement has been moved to [legacy-projects/](legacy-projects).
See the [CHANGELOG](CHANGELOG.md) for what has been done so far, and when.
# Clusters
@@ -51,7 +49,7 @@ MySQL version: `5.7`
### Cluster-hosted services
#### Ingress
Ingress is provided by [ingress-nginx](https://github.com/kubernetes/ingress-nginx), installed with Helm (chart: stable/ingress-nginx), and [cert-manager](https://github.com/jetstack/cert-manager) is installed with Helm (chart: stable/cert-manager) to obtain certificates for requested domains. Configuration is managed completely in source code, stored in [palpantlab-ingress](palpantlab-ingress:README.md). DNS CNAMEs are managed in Google Domains to direct traffic to the public IP of the single GKE LoadBalancer for the NGINX ingress service. I chose nginx-ingress because I had previous experience configuring it, and because the Google L7 Ingress creates a _significant_ number of routing rules in GCP that cost a lot (at the budget levels I'm working at), while with nginx-ingress I create only Kubernetes objects and consume CPU that I am already paying for.
Ingress is provided by [ingress-nginx](https://github.com/kubernetes/ingress-nginx), installed with Helm (chart: stable/ingress-nginx), and [cert-manager](https://github.com/jetstack/cert-manager) is installed with Helm (chart: stable/cert-manager) to obtain certificates for requested domains. Configuration is managed completely in source code, stored in [palpantlab-ingress](https://gitlab.palpant.us/palpantlab-ingress). DNS CNAMEs are managed in Google Domains to direct traffic to the public IP of the single GKE LoadBalancer for the NGINX ingress service. I chose nginx-ingress because I had previous experience configuring it, and because the Google L7 Ingress creates a _significant_ number of routing rules in GCP that cost a lot (at the budget levels I'm working at), while with nginx-ingress I create only Kubernetes objects and consume CPU that I am already paying for.
ingress-nginx helm chart: `nginx-ingress-0.31.0`
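To illustrate how the pieces fit together, here is a minimal sketch of the kind of Ingress object a service under this setup would use; the hostname, Service name, and ClusterIssuer name are placeholders rather than values from palpantlab-ingress, and the annotation uses the `certmanager.k8s.io` group that cert-manager v0.5.x expected.

```yaml
# Hypothetical Ingress for one service; nginx-ingress routes the host and
# cert-manager issues the TLS certificate into the named Secret.
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: example-web
  annotations:
    kubernetes.io/ingress.class: nginx
    certmanager.k8s.io/cluster-issuer: letsencrypt-prod  # placeholder issuer name
spec:
  tls:
    - hosts:
        - example.palpant.us
      secretName: example-palpant-us-tls  # cert-manager writes the issued certificate here
  rules:
    - host: example.palpant.us
      http:
        paths:
          - path: /
            backend:
              serviceName: example-web
              servicePort: 80
```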
@@ -62,7 +60,7 @@ cert-manager helm chart: `cert-manager-v0.4.1`
cert-manager image: `quay.io/jetstack/cert-manager-controller:v0.5.2`
#### Source control with locally hosted Gitlab
All of this actually started with hosting a private GitLab instance, https://gitlab.palpant.us, which ran on an Ubuntu virtual machine. That application now runs as a cloud-native Helm chart in GKE, and is managed in the submodule [palpantlab-gitlab](palpantlab-gitlab:README.md). It stores all source code for these various projects, and GitLab CI/CD connects to both Kubernetes clusters, using them as build slaves and deploying services to them when there are updates.
All of this actually started with hosting a private GitLab instance, https://gitlab.palpant.us, which ran on an Ubuntu virtual machine. That application now runs as a cloud-native Helm chart in GKE, and is managed in the submodule [palpantlab-gitlab](https://gitlab.palpant.us/justin/palpantlab-gitlab). It stores all source code for these various projects, and GitLab CI/CD connects to both Kubernetes clusters, using them as build slaves and deploying services to them when there are updates.
Prior to the move to GKE there were periods of significant instability and a high risk of data loss. However, data is now stored on GCE Persistent Disks, with LFS artifacts and daily backups (via Kubernetes CronJob) stored to GCS with object versioning enabled, greatly reducing the risk of data loss.
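As a rough sketch of what such a backup job can look like (the schedule, bucket, and PersistentVolumeClaim names below are assumptions, not the actual manifest from palpantlab-gitlab):

```yaml
# Hypothetical daily backup CronJob: copy backup archives from a PVC to a
# versioned GCS bucket. GCS credentials (e.g. a mounted service-account key)
# are omitted here for brevity.
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: gitlab-backup-to-gcs
spec:
  schedule: "0 9 * * *"          # once per day
  concurrencyPolicy: Forbid
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: OnFailure
          containers:
            - name: upload
              image: google/cloud-sdk:alpine
              command: ["sh", "-c", "gsutil -m rsync -r /backups gs://example-gitlab-backups/"]
              volumeMounts:
                - name: backups
                  mountPath: /backups
          volumes:
            - name: backups
              persistentVolumeClaim:
                claimName: gitlab-backups  # placeholder PVC holding the backup archives
```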
@@ -83,10 +81,10 @@ GitLab runner version: `gitlab/gitlab-runner:alpine-v11.9.2`
I deploy a handful of static websites on the cluster, providing reliability via replication and load balancing (via NGINX and Kubernetes) and low latency (courtesy mostly of Google Cloud's networking). These include my personal website, [justin.palpant.us](https://justin.palpant.us), and my brother's, [tim.palpant.us](https://tim.palpant.us).
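As a sketch of the general shape (names and image are placeholders; in practice each site's image bakes its static files into an NGINX base image), each site is just a small replicated Deployment behind a Service, exposed through an Ingress like the one sketched earlier:

```yaml
# Hypothetical static-site workload: two NGINX replicas behind a ClusterIP Service.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-site
spec:
  replicas: 2                     # replication for reliability
  selector:
    matchLabels: {app: example-site}
  template:
    metadata:
      labels: {app: example-site}
    spec:
      containers:
        - name: web
          image: nginx:1.15-alpine  # placeholder; a real image contains the built site
          ports:
            - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: example-site
spec:
  selector: {app: example-site}
  ports:
    - port: 80
```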
#### Boxomon
Boxomon was originally a small Arduino and touchscreen project I made in college as a present. After a few years the wiring began to fail, so I rebuilt it as a small 3-tier web-app in 20 hours over the holidays as a Christmas present, and to experiment with Angular, golang, and Helm in the context of a real set of deployable services. The source code is nothing fancy, but is maintained at [boxomon](boxomon).
Boxomon was originally a small Arduino and touchscreen project I made in college as a present. After a few years the wiring began to fail, so I rebuilt it as a small 3-tier web-app in 20 hours over the holidays as a Christmas present, and to experiment with Angular, golang, and Helm in the context of a real set of deployable services. The source code is nothing fancy, but is maintained at [boxomon](https://gitlab.palpant.us/justin/boxomon).
#### CloudSQL database proxies
To provide access from within the clusters to the highly-reliable CloudSQL databases I created, I run the CloudSQL proxy tool released by Google. The code to deploy that proxy is maintained at [palpantlab-sql-dbs](palpantlab-sql-dbs:README.md).
To provide access from within the clusters to the highly-reliable CloudSQL databases I created, I run the CloudSQL proxy tool released by Google. The code to deploy that proxy is maintained at [palpantlab-sql-dbs](https://gitlab.palpant.us/justin/palpantlab-sql-dbs).
CloudSQL proxy version: `gcr.io/cloudsql-docker/gce-proxy:1.11`
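A minimal sketch of how such a proxy can be run in-cluster is below; the instance connection name, Secret, and Service names are placeholders, not values from palpantlab-sql-dbs.

```yaml
# Hypothetical in-cluster CloudSQL proxy: a Deployment plus a ClusterIP
# Service so workloads can reach MySQL at cloudsql-proxy:3306.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: cloudsql-proxy
spec:
  replicas: 1
  selector:
    matchLabels: {app: cloudsql-proxy}
  template:
    metadata:
      labels: {app: cloudsql-proxy}
    spec:
      containers:
        - name: proxy
          image: gcr.io/cloudsql-docker/gce-proxy:1.11
          command:
            - /cloud_sql_proxy
            - -instances=example-project:us-west1:example-instance=tcp:0.0.0.0:3306  # placeholder connection name
            - -credential_file=/secrets/credentials.json
          volumeMounts:
            - name: cloudsql-credentials
              mountPath: /secrets
              readOnly: true
      volumes:
        - name: cloudsql-credentials
          secret:
            secretName: cloudsql-service-account  # placeholder service-account key Secret
---
apiVersion: v1
kind: Service
metadata:
  name: cloudsql-proxy
spec:
  selector: {app: cloudsql-proxy}
  ports:
    - port: 3306
      targetPort: 3306
```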
@@ -228,17 +226,17 @@ The DSM212 provides a simple HTTP reverse proxy functionality with LetsEncrypt c
The DSM212 also runs an LDAP server, which currently provides login service for the one node in the cluster and is also exposed to the internet as `ldap.palpant.us:686`. That LDAP service is used by GitLab as a login provider and is secured by TLS, which is terminated at the NAS.
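For reference, the LDAP wiring on the GitLab side is roughly the following values block for the cloud-native chart; this is an assumed shape with placeholder bind DN and Secret names, not the actual configuration in palpantlab-gitlab.

```yaml
# Assumed GitLab Helm chart values for an LDAP login provider; simple_tls
# matches TLS being terminated at the NAS-hosted LDAP endpoint.
global:
  appConfig:
    ldap:
      servers:
        main:
          label: "palpant.us LDAP"
          host: ldap.palpant.us
          port: 686
          uid: uid
          encryption: simple_tls
          bind_dn: "uid=gitlab,cn=users,dc=palpant,dc=us"  # placeholder bind DN
          password:
            secret: gitlab-ldap-password                   # placeholder Secret with the bind password
            key: password
```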
#### CloudSQL database proxies
To provide access from within the clusters to the highly-reliable CloudSQL databases I created, I run the CloudSQL proxy tool released by Google. The code to deploy that proxy is maintained at [palpantlab-sql-dbs](palpatnlab-sql-dbs:README.md).
To provide access from within the clusters to the highly-reliable CloudSQL databases I created, I run the CloudSQL proxy tool released by Google. The code to deploy that proxy is maintained at [palpantlab-sql-dbs](https://gitlab.palpant.us/justin/palpantlab-sql-dbs).
CloudSQL proxy version: `gcr.io/cloudsql-docker/gce-proxy:1.11`
#### GitLab CI Runner
The GitLab runner which interacts with `palpantlab-sfo` is tagged with `k8s`, `sfo`, and `gpu`. It uses nvidia-device-drivers to give automated builds direct access to the NVIDIA GPU, should that be necessary (e.g. to run tests that use tensorflow-gpu). That runner is maintained with the rest of the GitLab deployment, in [palpantlab-gitlab](palpantlab-gitlab:deploy/kubectl-apply/sfo).
The GitLab runner which interacts with `palpantlab-sfo` is tagged with `k8s`, `sfo`, and `gpu`. It uses nvidia-device-drivers to give automated builds direct access to the NVIDIA GPU, should that be necessary (e.g. to run tests that use tensorflow-gpu). That runner is maintained with the rest of the GitLab deployment, in [palpantlab-gitlab](https://gitlab.palpant.us/justin/palpantlab-gitlab).
GitLab runner version: `gitlab/gitlab-runner:alpine-v11.9.2`
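A job that wants this runner simply selects it by tag; the snippet below is a hypothetical `.gitlab-ci.yml` job (the image and script are placeholders), and whether a GPU is actually visible to the build pod depends on how the runner's Kubernetes executor is configured.

```yaml
# Hypothetical CI job pinned to the GPU-capable on-prem runner via tags.
gpu-test:
  stage: test
  tags: [k8s, sfo, gpu]
  image: tensorflow/tensorflow:1.13.1-gpu-py3
  script:
    - python -c "import tensorflow as tf; print(tf.test.is_gpu_available())"
```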
#### transmission-web
I deployed the open-source project [haugene/docker-transmission-openvpn](https://github.com/haugene/docker-transmission-openvpn) as a Kubernetes Deployment and Service on the on-prem cluster. It has access to the NAS as well as to the filesystem on the local node, and it has sufficient privilege to open an OpenVPN tunnel, so all torrent traffic goes through NordVPN. This service is not authenticated, so it is not exposed on the internet; you must have access to the Kubernetes API to tunnel the service ports and reach the WebUI. The deployment of this webapp is managed at [transmission-web](transmission-web:README.md).
I deployed the open-source project [haugene/docker-transmission-openvpn](https://github.com/haugene/docker-transmission-openvpn) as a Kubernetes Deployment and Service on the on-prem cluster. It has access to the NAS as well as to the filesystem on the local node, and it has sufficient privilege to open an OpenVPN tunnel, so all torrent traffic goes through NordVPN. This service is not authenticated, so it is not exposed on the internet; you must have access to the Kubernetes API to tunnel the service ports and reach the WebUI. The deployment of this webapp is managed at [palpantlab-transmission](https://gitlab.palpant.us/justin/palpantlab-transmission).
Docker image: `haugene/transmission-openvpn`
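A rough sketch of the pod spec this kind of workload needs is below (not the actual manifest from palpantlab-transmission); the container must be able to bring up a VPN tunnel, and the VPN credentials come from a Secret. The WebUI is then reached with something like `kubectl port-forward deploy/transmission-openvpn 9091:9091` from a machine with cluster credentials.

```yaml
# Hypothetical transmission-openvpn Deployment; the Secret name is a placeholder.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: transmission-openvpn
spec:
  replicas: 1
  selector:
    matchLabels: {app: transmission-openvpn}
  template:
    metadata:
      labels: {app: transmission-openvpn}
    spec:
      containers:
        - name: transmission
          image: haugene/transmission-openvpn
          securityContext:
            capabilities:
              add: ["NET_ADMIN"]       # needed to bring up the OpenVPN tunnel
          env:
            - name: OPENVPN_PROVIDER
              value: NORDVPN
            - name: OPENVPN_USERNAME
              valueFrom: {secretKeyRef: {name: nordvpn-credentials, key: username}}
            - name: OPENVPN_PASSWORD
              valueFrom: {secretKeyRef: {name: nordvpn-credentials, key: password}}
          ports:
            - containerPort: 9091      # Transmission WebUI
```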
......