Jeric Dy

Just sharing this for my future reference as well.

  1. Install Ubuntu

    Raspberry Pi OS (previously Raspbian) has not released a 64-bit build yet. I also do not want to be bothered with the iptables workaround that Traefik needs on Raspberry Pi OS.

    1. Download the 64-bit image here
    2. Restore the image to your SD card/s.
    3. Unlike with Raspberry Pi OS, you do not need to create an ssh file in /boot to enable SSH.
    4. Append:
      					cgroup_memory=1 cgroup_enable=memory
      to your /boot/firmware/cmdline.txt file.
    5. If you forgot the previous step, just SSH into your Pi, modify said file, and reboot.
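    The cmdline.txt edit can also be scripted. A sketch, run here against a scratch copy (on the Pi, point CMDLINE at /boot/firmware/cmdline.txt and run the sed with sudo):

```shell
# Scratch copy for demonstration; on the Pi this would be /boot/firmware/cmdline.txt
CMDLINE=/tmp/cmdline.txt
printf '%s\n' 'console=tty1 root=LABEL=writable rootwait' > "$CMDLINE"

# cmdline.txt must stay a single line, so append to the existing line
# instead of adding a new one
sed -i 's/$/ cgroup_memory=1 cgroup_enable=memory/' "$CMDLINE"
cat "$CMDLINE"
```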
  2. Install K3s

    1. On your master node, run the K3s install script. If your master node is behind a router (i.e. you are port forwarding), pass your router's IP as a TLS SAN:
      					curl -sfL | INSTALL_K3S_EXEC="--tls-san [your router IP]" sh -s -
      Otherwise:
      					curl -sfL | sh -s -
    2. Check master K3s status with:
      					systemctl status k3s.service
      No error? Good.
    3. Copy the K3s kubeconfig file to your client (probably your desktop PC). The file is located at /etc/rancher/k3s/k3s.yaml on the master node. Copy it to ~/.kube/config on your client.
    4. Modify the ~/.kube/config file: change the clusters[0].cluster.server value to point at your master node's IP (or, if you are port forwarding, your router's IP).
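    For illustration, the edited part of .kube/config would end up looking something like this (192.168.1.10 is a made-up example standing in for your master or router IP):

```yaml
clusters:
- cluster:
    certificate-authority-data: [leave as generated]
    server: https://192.168.1.10:6443   # point at the master (or router) IP, port 6443
  name: default
```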
    5. Check connection from client with:
      					kubectl get node
      Expected result:
      					NAME   STATUS   ROLES                  AGE   VERSION
      					master Ready    control-plane,master   10m   v1.20.5+k3s1
    6. Get the K3s token on your master with:
      					sudo cat /var/lib/rancher/k3s/server/node-token
    7. On your worker nodes:
      					curl -sfL | K3S_URL=https://[master IP]:6443 K3S_TOKEN="[K3s token]" sh -
    8. Check worker K3s status with:
      					systemctl status k3s-agent.service
      No error? Good.
    9. Check connection from client with:
      					kubectl get node
      Expected result:
      					NAME   STATUS   ROLES                  AGE   VERSION
      					master Ready    control-plane,master   20m   v1.20.5+k3s1
      					worker Ready    <none>                 10m   v1.20.5+k3s1
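    If you want to script this check instead of eyeballing the table, the `kubectl get node` output is easy to parse. A sketch; the sample output from above is hard-coded into a function so the parsing logic can run anywhere, just swap the function body for the real command:

```shell
# Stand-in for `kubectl get node`; replace the printf with the real command
get_nodes() {
  printf '%s\n' \
    'NAME   STATUS   ROLES                  AGE   VERSION' \
    'master Ready    control-plane,master   20m   v1.20.5+k3s1' \
    'worker Ready    <none>                 10m   v1.20.5+k3s1'
}

# Count data rows whose STATUS column is not "Ready" (NR > 1 skips the header)
not_ready=$(get_nodes | awk 'NR > 1 && $2 != "Ready"' | wc -l)
echo "nodes not ready: $not_ready"
```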
  3. Optional: Install Web UI (Dashboard)

    1. On your client, run:
      					kubectl apply -f
    2. Create admin user
      					cat <<EOF | kubectl apply -f -
      					apiVersion: v1
      					kind: ServiceAccount
      					metadata:
      					  name: admin-user
      					  namespace: kubernetes-dashboard
      					EOF
    3. Give cluster admin role to admin user
      					cat <<EOF | kubectl apply -f -
      					apiVersion: rbac.authorization.k8s.io/v1
      					kind: ClusterRoleBinding
      					metadata:
      					  name: admin-user
      					roleRef:
      					  apiGroup: rbac.authorization.k8s.io
      					  kind: ClusterRole
      					  name: cluster-admin
      					subjects:
      					- kind: ServiceAccount
      					  name: admin-user
      					  namespace: kubernetes-dashboard
      					EOF
    4. Get bearer token:
      					kubectl -n kubernetes-dashboard describe secret admin-user-token | grep ^token
    5. Run proxy from client
      					kubectl proxy
    6. On your browser, open http://localhost:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/. Enter the bearer token you have obtained.
  4. Optional: Install Longhorn

    Installing Longhorn gives you dynamic provisioning for persistent volume claims.
    1. On your client, run:
      					kubectl apply -f
    2. Wait a little while; you can check the status of the pods via the web UI or via:
      					kubectl -n longhorn-system get pods
    3. Test! Create a PVC. On your client, run:
      					cat <<EOF | kubectl apply -f -
      					apiVersion: v1
      					kind: PersistentVolumeClaim
      					metadata:
      					  name: test-pvc
      					spec:
      					  accessModes:
      					  - ReadWriteOnce
      					  storageClassName: longhorn
      					  resources:
      					    requests:
      					      storage: 1Gi
      					EOF
    4. Test! Create a Pod that uses the PVC. On your client, run:
      					cat <<EOF | kubectl apply -f -
      					apiVersion: v1
      					kind: Pod
      					metadata:
      					  name: volume-test
      					spec:
      					  containers:
      					  - name: volume-test
      					    image: nginx:stable-alpine
      					    imagePullPolicy: IfNotPresent
      					    volumeMounts:
      					    - name: test-pvc
      					      mountPath: /data
      					    ports:
      					    - containerPort: 80
      					  volumes:
      					  - name: test-pvc
      					    persistentVolumeClaim:
      					      claimName: test-pvc
      					EOF
    5. Check the status. On your client, run:
      					kubectl get pods
      If all goes well, you should see the pod running:
      					NAME          READY   STATUS    RESTARTS   AGE
      					volume-test   1/1     Running   0          29s
    6. Clean up:
      					kubectl delete pvc/test-pvc pod/volume-test
    7. Optionally, you can open up the Longhorn UI with an Ingress:
      					apiVersion: networking.k8s.io/v1
      					kind: Ingress
      					metadata:
      					  namespace: longhorn-system
      					  name: longhorn-ingress
      					spec:
      					  rules:
      					  - host: [your domain]
      					    http:
      					      paths:
      					      - path: /
      					        pathType: Prefix
      					        backend:
      					          service:
      					            name: longhorn-frontend
      					            port:
      					              number: 80
      If you do not own a domain, just mock it with an entry in /etc/hosts and it should work just fine.
    8. If you are not serious about storage redundancy, you may want to change the replica count to 1 (the default is 3 replicas).
      					kubectl -n longhorn-system edit cm/longhorn-storageclass
      change numberOfReplicas to 1
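      For reference, the StorageClass definition lives inside that ConfigMap; after the edit it would look roughly like this (fields abbreviated, so treat it as a sketch rather than the exact file):

```yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: longhorn
provisioner: driver.longhorn.io
parameters:
  numberOfReplicas: "1"   # changed from the default "3"
```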
  5. Optional: Use USB Storage Device for Longhorn

    1. Connect your USB storage device to one of your nodes.
    2. ssh into that node.
    3. Create mount point
      					sudo mkdir /media/storage
    4. Get the PARTUUID with:
      					sudo blkid
      You should have something like:
      					/dev/sda1: LABEL="mystorage" UUID="[device uuid]" TYPE="ext4" PARTUUID="[partition uuid]"
    5. Modify your /etc/fstab file by adding the following line (note: my external device's file system is ext4):
      					PARTUUID=[partition uuid] /media/storage ext4 defaults,noatime,nodiratime 0 2
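      The fstab edit can be scripted as well. A sketch using a made-up PARTUUID and a scratch file (on the node you would append to /etc/fstab itself, using the value reported by blkid):

```shell
# Made-up example PARTUUID; substitute the value from `sudo blkid`
PARTUUID="abcd1234-01"
FSTAB=/tmp/fstab.demo   # on the node this would be /etc/fstab

printf 'PARTUUID=%s /media/storage ext4 defaults,noatime,nodiratime 0 2\n' \
  "$PARTUUID" >> "$FSTAB"
cat "$FSTAB"
```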
    6. You can test whether it mounts correctly with:
      					sudo mount -a
    7. K3s runs as root, so it has full access to the device. Optionally, you can create a group for the directory and add yourself to it to give yourself access to the files:
      					sudo groupadd [group name]
      					sudo usermod -aG [group name] [your username]
      					sudo chown -R :[group name] /media/storage
      					# set the gid bit so all files/directories created will have the same group as the parent directory
      					sudo chmod g+s /media/storage
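      The effect of the setgid bit can be sanity-checked on a scratch directory (no new group needed; this just shows that children inherit the parent directory's group):

```shell
# Scratch directory standing in for /media/storage
DIR=/tmp/setgid-demo
rm -rf "$DIR"
mkdir -p "$DIR"
chmod g+s "$DIR"

# Any file or directory created under DIR now inherits DIR's group
mkdir "$DIR/sub"
ls -ldn "$DIR" "$DIR/sub"
```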
    8. Add the storage to Longhorn: open the Longhorn UI, go to Nodes, select "Edit node and disks", click "Add Disk", fill in the details, and click Save.
    9. And you are good to go!