This Ghost Blog is now running with Let's Encrypt in a cheap bare-metal Kubernetes Cluster (on Hetzner Cloud) — Part 3/3
On how to run a Ghost blog with Let's Encrypt in a cheap bare-metal Kubernetes Cluster in Hetzner Cloud
Blog Deployment Descriptors
This final part walks through the different sections of the YAML manifests deployed for the blog.
Storage
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: rm3l-org-content
spec:
  storageClassName: hcloud-volumes
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
As discussed in the previous part, we leverage the Container Storage Interface (CSI) implementation for Hetzner, which we installed earlier. This makes Hetzner create a volume and bind it to a single node in Hetzner Cloud.
Note that, at the time of writing, the CSI driver does not support the ReadWriteMany access mode. As a consequence, only one node at a time can access the volume created. We will see below that we unfortunately cannot run more than one replica, which may imply some downtime whenever the pod is recreated.
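As a quick sanity check (a hypothetical step, not part of the manifests themselves), we can verify that the claim gets bound to a dynamically provisioned volume once applied:

kubectl get pvc rm3l-org-content
kubectl get pv

The claim should report a Bound status once the Hetzner volume has been created and attached to a node.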
Pod Deployment
apiVersion: apps/v1
kind: Deployment
metadata:
  name: rm3l-org
  labels:
    app: rm3l-org
    org.rm3l.services.service_name: rm3l-org
spec:
  # Only 1 replica here, due to PVC limitation in Hetzner!
  replicas: 1
  selector:
    matchLabels:
      app: rm3l-org
  strategy:
    # We accept some downtime, due to the fact that 2 containers cannot share the same PVC in Hetzner Cloud.
    # The CSI Driver for Hetzner does not support ReadWriteMany access mode => we cannot use RollingUpdate
    # strategy since this would imply more than one pod running at the same time
    type: Recreate
  template:
    metadata:
      labels:
        app: rm3l-org
    spec:
      affinity:
        podAntiAffinity:
          # No 2 'rm3l-org' pods should be on the same node host
          preferredDuringSchedulingIgnoredDuringExecution:
            - weight: 100
              podAffinityTerm:
                labelSelector:
                  matchExpressions:
                    - key: app
                      operator: In
                      values:
                        - rm3l-org
                topologyKey: "kubernetes.io/hostname"
      containers:
        - name: rm3l-org
          image: registry.gitlab.com/lemra/services/rm3l-org:<VERSION>
          resources:
            limits:
              memory: "500Mi"
            requests:
              memory: "100Mi"
          ports:
            - containerPort: 2368
          lifecycle:
            postStart:
              exec:
                command: ["/bin/sh", "-c", "chmod 700 /data/ghost/populate_ghost_content.sh && /data/ghost/populate_ghost_content.sh"]
          livenessProbe:
            httpGet:
              path: /
              port: 2368
            initialDelaySeconds: 30
            periodSeconds: 90
            timeoutSeconds: 60
          readinessProbe:
            httpGet:
              path: /
              port: 2368
            initialDelaySeconds: 30
            periodSeconds: 60
          volumeMounts:
            - mountPath: /var/lib/ghost/content
              name: rm3l-org-content
          env:
            - name: url
              value: https://rm3l.org
      volumes:
        - name: rm3l-org-content
          persistentVolumeClaim:
            claimName: rm3l-org-content
      imagePullSecrets:
        - name: gitlab-registry-services-creds
Note that the container image is pulled from my own private GitLab Docker registry, but the exact steps involved for this will be covered in a separate blog post.
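For reference until then, an image pull secret such as the gitlab-registry-services-creds one referenced above can typically be created with a command along these lines (the placeholders are hypothetical and should be replaced with actual GitLab Deploy Token credentials):

kubectl create secret docker-registry gitlab-registry-services-creds \
  --docker-server=registry.gitlab.com \
  --docker-username=<GITLAB_DEPLOY_TOKEN_USERNAME> \
  --docker-password=<GITLAB_DEPLOY_TOKEN_PASSWORD>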
Also, for dynamic resource management, I configured a HorizontalPodAutoscaler, to ensure new pods are created once the average CPU utilization across pods rises above 70%:
apiVersion: autoscaling/v2beta2
kind: HorizontalPodAutoscaler
metadata:
  name: rm3l-org
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: rm3l-org
  minReplicas: 1
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70
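Note that resource-based autoscaling only works if the Kubernetes Metrics Server is running in the cluster, and that computing CPU utilization requires CPU requests to be declared on the container (only memory requests are declared in the Deployment above). Once everything is in place, the autoscaler decisions can be observed with:

kubectl get hpa rm3l-org --watch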
Services
apiVersion: v1
kind: Service
metadata:
  name: rm3l-org
spec:
  selector:
    app: rm3l-org
  ports:
    - protocol: TCP
      port: 2368
      targetPort: 2368
  type: ClusterIP
Note that I do not expose the Service itself on an external IP address outside the cluster; external exposure is handled by the Ingress below. The ClusterIP Service type exposes the Service on a cluster-internal IP, which makes it reachable only from within the cluster. This is the default ServiceType.
Remember that NGINX is configured as our Ingress Controller and, as such, serves as the main reverse-proxy entry point to the application.
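As a side note, even a ClusterIP Service can be smoke-tested from a local machine (a quick, hypothetical check) by port-forwarding to it and browsing http://localhost:2368:

kubectl port-forward service/rm3l-org 2368:2368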
Ingress
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: services-rm3l-org-ingress
  annotations:
    kubernetes.io/ingress.class: "nginx"
    certmanager.k8s.io/cluster-issuer: "letsencrypt-staging"
spec:
  tls:
    - hosts:
        - rm3l.org
      secretName: letsencrypt-staging
  rules:
    - host: rm3l.org
      http:
        paths:
          - backend:
              serviceName: rm3l-org
              servicePort: 2368
This simply creates an Ingress resource in the cluster, which performs host-based routing and TLS/SSL termination. In other words, the requested host allows the Ingress to determine the right backend Service to route each request to. This is also what actually triggers Cert-Manager to:
- place an order for a certificate for the rm3l.org host
- handle the challenges requested by Let's Encrypt for domain validation
- manage certificate auto-renewal
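The issuance process can be followed by inspecting the resources Cert-Manager creates behind the scenes, e.g. (assuming here that the Certificate object is named after the letsencrypt-staging secret declared in the Ingress):

kubectl get certificates,orders,challenges
kubectl describe certificate letsencrypt-staging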
Once this is done, the blog should be reachable via the NGINX Ingress external IP address, which I set as an A record in my DNS provider's settings for this domain. This allows "rm3l.org" to resolve to the NGINX Ingress external IP address.
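A quick way to confirm the DNS setup (again, a mere sanity check) is to resolve the domain and compare the result with the address reported for the Ingress:

dig +short rm3l.org
kubectl get ingress services-rm3l-org-ingress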
Wrapping up
This was a pretty fun and exciting journey, playing with and learning a lot more about Kubernetes (from both the provision-and-manage and the deploy-in-the-cluster perspectives).
Change being the only constant, my next step is to attempt a different deployment strategy for this blog, now that Ghost 3.0 has been released with support for running as a true headless CMS.
I now look forward to deploying this blog using the JAMstack (as in JavaScript, APIs, Markup), with Gatsby.js (front-end) + Netlify (PaaS) + Ghost (headless CMS back-end).
Please stay tuned — this other journey will be the topic of another blog post.