Kubernetes and Helm
Local Deployment
The easiest way to run a local k8s cluster is with kind, which runs the cluster inside Docker, so we need to install Docker first.
Setup
brew install --cask docker
Then we can install kubectl, helm and kind. kubectl is the command-line tool used to control a k8s cluster, and helm is the package manager for Kubernetes.
brew install kubectl helm kind
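As a quick sanity check you can confirm the tools are installed by printing their versions (the exact output depends on the versions you installed).
kubectl version --client
helm version
kind version
docker --version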
Create a shell script with a name of your choice, make it executable with chmod +x, and put it somewhere on your $PATH (see the example after the script listing).
The script creates and runs a local k8s cluster with a local Docker registry, and also installs NGINX as an ingress controller.
create-local-cluster.sh
#!/bin/sh
set -o errexit
# create registry container unless it already exists
reg_name='kind-registry'
reg_port='5000'
running="$(docker inspect -f '{{.State.Running}}' "${reg_name}" 2>/dev/null || true)"
if [ "${running}" != 'true' ]; then
  docker run \
    -d --restart=always -p "127.0.0.1:${reg_port}:5000" --name "${reg_name}" \
    registry:2
fi
# create a cluster with the local registry enabled in containerd
cat <<EOF | kind create cluster --config=-
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
  kubeadmConfigPatches:
  - |
    kind: InitConfiguration
    nodeRegistration:
      kubeletExtraArgs:
        node-labels: "ingress-ready=true"
  extraPortMappings:
  - containerPort: 80
    hostPort: 80
    protocol: TCP
  - containerPort: 443
    hostPort: 443
    protocol: TCP
containerdConfigPatches:
- |-
  [plugins."io.containerd.grpc.v1.cri".registry.mirrors."localhost:${reg_port}"]
    endpoint = ["http://${reg_name}:${reg_port}"]
EOF
# connect the registry to the cluster network
# (the network may already be connected)
docker network connect "kind" "${reg_name}" || true
# Document the local registry
# https://github.com/kubernetes/enhancements/tree/master/keps/sig-cluster-lifecycle/generic/1755-communicating-a-local-registry
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: ConfigMap
metadata:
  name: local-registry-hosting
  namespace: kube-public
data:
  localRegistryHosting.v1: |
    host: "localhost:${reg_port}"
    help: "https://kind.sigs.k8s.io/docs/user/local-registry/"
EOF
# install nginx ingress
# https://kind.sigs.k8s.io/docs/user/ingress/
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/main/deploy/static/provider/kind/deploy.yaml
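Assuming you saved the script as create-local-cluster.sh, you can make it executable and move it onto your $PATH like this (~/bin is only an example location, use any directory that is on your $PATH).
chmod +x create-local-cluster.sh
mv create-local-cluster.sh ~/bin/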
Run the script and you should get the following output.
Output from running create-local-cluster.sh
Creating cluster "kind" ...
✓ Ensuring node image (kindest/node:v1.21.1) 🖼
✓ Preparing nodes 📦
✓ Writing configuration 📜
✓ Starting control-plane 🕹️
✓ Installing CNI 🔌
✓ Installing StorageClass 💾
Set kubectl context to "kind-kind"
You can now use your cluster with:
kubectl cluster-info --context kind-kind
Verify that you can connect to the cluster.
kubectl cluster-info
....
Kubernetes control plane is running at https://127.0.0.1:62502
CoreDNS is running at https://127.0.0.1:62502/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
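The script also installed the NGINX ingress controller. Before using it you can wait for its pods to become ready, for example with the readiness check suggested in the kind ingress documentation (adjust the labels if they differ in your version):
kubectl wait --namespace ingress-nginx \
  --for=condition=ready pod \
  --selector=app.kubernetes.io/component=controller \
  --timeout=90s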
To access the published Helm chart and Docker image on GitLab we need to create a personal access token as described here. Make sure you create the token with the api scope. When you have your access token, add it to the local k8s cluster as a registry secret like so. Replace <username> and <email> with your personal GitLab user details, and <access_token> with your newly created token.
kubectl create secret docker-registry secret-registry-key --docker-server=registry.gitlab.com --docker-username=<username> --docker-password=<access_token> --docker-email=<email>
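You can confirm that the secret was created:
kubectl get secret secret-registry-key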
Add the GitLab Helm chart repository to helm. Replace <username> and <access_token> with your GitLab username and access token.
helm repo add --username <username> --password <access_token> finsta https://gitlab.com/api/v4/projects/28581441/packages/helm/stable
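After adding the repository you can refresh the index and list the available chart versions (the versions shown depend on what has been published):
helm repo update
helm search repo finsta --devel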
Deploy
Install the latest development release of Finsta.
helm install dev finsta/finsta-service-app --devel
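You can also inspect the Helm release itself (dev is the release name we chose above):
helm list
helm status dev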
Verify that the deployment has started.
kubectl describe pod
....
Events:
  Type    Reason     Age   From               Message
  ----    ------     ----  ----               -------
  Normal  Scheduled  22s   default-scheduler  Successfully assigned default/finsta-service-app-68db7d4bc9-x4w8j to kind-control-plane
  Normal  Pulling    22s   kubelet            Pulling image "registry.gitlab.com/tritt/finsta/finsta-service-app:0.1.4-beta.2"
After a while the application should be running.
kubectl get pod
....
NAME                                  READY   STATUS    RESTARTS   AGE
finsta-service-app-68db7d4bc9-x4w8j   1/1     Running   0          2m45s
You can look at the logs like so. Replace the pod name with the one from your own cluster (the suffix is randomly generated).
kubectl logs finsta-service-app-68db7d4bc9-x4w8j
....
 __  __ _                                  _
|  \/  (_) ___ _ __ ___  _ __   __ _ _   _| |_
| |\/| | |/ __| '__/ _ \| '_ \ / _` | | | | __|
| |  | | | (__| |  | (_) | | | | (_| | |_| | |_
|_|  |_|_|\___|_|  \___/|_| |_|\__,_|\__,_|\__|
  Micronaut (v3.0.0-RC1)
07:50:50.743 [main] INFO i.m.context.env.DefaultEnvironment - Established active environments: [k8s, cloud]
07:50:56.204 [main] INFO io.micronaut.runtime.Micronaut - Startup completed in 6047ms. Server Running: http://finsta-service-app-68db7d4bc9-x4w8j:8080
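The log shows the server listening on port 8080. To reach the application from your machine you can port-forward to the pod (again, replace the pod name with your own; which paths return something useful depends on the application itself).
kubectl port-forward finsta-service-app-68db7d4bc9-x4w8j 8080:8080
Then, in a second terminal:
curl http://localhost:8080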