
Set up a Kubernetes cluster on Ubuntu Linux (works on a VM, in the cloud, or even on VirtualBox).

Any Debian/Ubuntu-based Linux works as well. I'm using 1 master and 2-3 workers. Tested in 2024-2025 and it still works well.

For example, this cluster uses the following configuration:

Name          IP Address    Role
k8s-master1   10.184.0.2    master, control plane
k8s-worker1   10.184.0.3    worker/minion
k8s-worker2   10.184.0.4    worker/minion

My topology follows the table above.

Let's start…

Update the master and all workers:

apt update && apt upgrade -y

Then apply this kernel tuning on every node:

tee /etc/modules-load.d/containerd.conf <<EOF
overlay
br_netfilter
EOF
tee /etc/sysctl.d/kubernetes.conf <<EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF
sysctl --system
reboot
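
After the reboot, a quick sanity check (nothing cluster-specific assumed, just standard tools) confirms the modules are loaded and the sysctl values took effect:

lsmod | grep -E 'overlay|br_netfilter'                 # both modules should be listed
sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables net.ipv4.ip_forward   # all should be 1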

Run this on all servers. We don't actually need Docker to build the k8s cluster, but for some reason my hands always itch to install it. You can skip installing Docker if you want.

sudo apt-get install -y ca-certificates curl gnupg lsb-release
sudo mkdir -p /etc/apt/keyrings

curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /etc/apt/keyrings/docker.gpg
echo   "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/ubuntu \
$(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null

sudo apt update

sudo apt-get install -y docker-ce docker-ce-cli containerd.io docker-compose-plugin

sudo apt install -y curl gnupg2 software-properties-common apt-transport-https ca-certificates  iputils-ping telnet

echo "deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.30/deb/ /" | sudo tee /etc/apt/sources.list.d/kubernetes.list

curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.30/deb/Release.key | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg
sudo apt update
sudo apt-get install -y kubelet kubeadm kubectl
sudo systemctl enable kubelet
sudo apt-mark hold kubelet kubeadm kubectl

containerd config default | sudo tee /etc/containerd/config.toml >/dev/null 2>&1
sudo sed -i 's/SystemdCgroup = false/SystemdCgroup = true/g' /etc/containerd/config.toml

sudo systemctl enable containerd
sudo systemctl restart containerd
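
To double-check that the cgroup driver change landed and containerd restarted cleanly:

grep SystemdCgroup /etc/containerd/config.toml   # should print: SystemdCgroup = true
sudo systemctl status containerd --no-pager      # should show active (running)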

As you can see above, I'm using apt-mark hold to prevent the system from upgrading these packages automatically, because I want the Kubernetes upgrade flow to stay under my control.
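
Later, when I do decide to upgrade, the apt side of the flow is simply unhold, upgrade, hold again:

sudo apt-mark showhold                              # confirm kubelet/kubeadm/kubectl are held
sudo apt-mark unhold kubelet kubeadm kubectl
sudo apt-get update && sudo apt-get install -y kubelet kubeadm kubectl
sudo apt-mark hold kubelet kubeadm kubectl          # put the hold back afterwards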

Okay, now the cluster initialization part. Run this command only once, and only on the master.

kubeadm init --pod-network-cidr=10.244.0.0/16

You will get output like this.

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 10.184.0.2:6443 --token zv4ph6.mj2x64qhxycqb35w \
        --discovery-token-ca-cert-hash sha256:01284d33201ff6f977e8db5yyyf202as7hcaf4584f97f8e48354c0639e49

Run this part on the master server only:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config
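
At this point kubectl on the master should already talk to the cluster. Note that the node will typically show NotReady until a CNI is deployed (we do that below):

kubectl get nodes              # master likely NotReady for now, that's expected
kubectl get pods -n kube-system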

Run this part on the worker nodes only, to join them into the cluster:

kubeadm join 10.184.0.2:6443 --token zv4ph6.mj2x64qhxycqb35w \
        --discovery-token-ca-cert-hash sha256:01284d33201ff6f977e8db5yyyf202as7hcaf4584f97f8e48354c0639e49

You can get that join command from the output of kubeadm init on the master.
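
If you lost that output, or the token has expired (bootstrap tokens expire after 24 hours by default), you can print a fresh join command on the master:

sudo kubeadm token create --print-join-command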

Network Manifest

Some people prefer Flannel or another CNI, but for this cluster I choose Calico. So we will deploy Calico as our CNI (Container Network Interface). Run these commands on the master:

curl https://raw.githubusercontent.com/projectcalico/calico/v3.28.0/manifests/calico.yaml -O

kubectl apply -f calico.yaml

Then run:

kubectl get nodes

Make sure every node's status is Ready. Sometimes it takes 1-2 minutes for a node to become Ready (it takes longer on low-spec servers).
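
If a node stays NotReady, checking the Calico pods in kube-system usually shows why:

kubectl get pods -n kube-system -o wide    # calico-node pods should be Running on every node
kubectl describe node k8s-worker1          # replace with the name of the stuck node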

Label the Kubernetes (k8s) Nodes

Sometimes I also label my Kubernetes nodes, like this:

kubectl label nodes k8s-master1 kubernetes.io/role=master
kubectl label nodes k8s-worker1 kubernetes.io/role=worker
kubectl label nodes k8s-worker2 kubernetes.io/role=worker

The node names depend on your servers' hostnames.
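
To confirm the labels were applied, list them (plain kubectl, nothing cluster-specific assumed):

kubectl get nodes --show-labels
kubectl get nodes              # the ROLES column should now show master/worker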

Kubernetes Dashboard

Now we will install the Kubernetes Dashboard on our cluster. Run this on the master:

kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.7.0/aio/deploy/recommended.yaml
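
Before exposing it, it's worth confirming the Dashboard pods actually came up:

kubectl get pods -n kubernetes-dashboard   # both dashboard pods should be Running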

After installing the Kubernetes Dashboard you cannot access it directly, because it only runs internally. I prefer to expose it as a NodePort so I can access it through the node IP.

Run this on master:

kubectl edit svc kubernetes-dashboard -n kubernetes-dashboard

Near the bottom, change type: ClusterIP to type: NodePort.

Type :wq! to save.
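
If you'd rather not edit interactively, the same change can be applied with a one-line patch (equivalent to the edit above):

kubectl -n kubernetes-dashboard patch svc kubernetes-dashboard -p '{"spec":{"type":"NodePort"}}'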

You can run this command to check which port it is running on:

kubectl get svc kubernetes-dashboard -n kubernetes-dashboard

root@k8s-master1:~# kubectl get svc kubernetes-dashboard -n kubernetes-dashboard
NAME                   TYPE       CLUSTER-IP     EXTERNAL-IP   PORT(S)         AGE
kubernetes-dashboard   NodePort   10.103.3.221   <none>        443:30000/TCP   3d11h

As you can see, my dashboard is running on port 30000, so you can now access it in a browser at an address like: https://IPAddress:30000

You can also edit the service (previous command) to set a custom port if you want; just change the nodePort value near the bottom.
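
For example, to pin it to port 30000 (any value in the default 30000-32767 NodePort range should work; the index 0 assumes the service's single HTTPS port):

kubectl -n kubernetes-dashboard patch svc kubernetes-dashboard --type='json' \
  -p='[{"op":"replace","path":"/spec/ports/0/nodePort","value":30000}]'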

You will get an HTTPS certificate warning from the browser. (I will update the readme with steps to secure HTTPS, but I'm too lazy right now.)

Just accept it in the browser for now and select Token to log in.

So how do you log in to the Kubernetes Dashboard using a token? This is how we do it.

Create a Kubernetes (k8s) Token to Log In

kubectl create serviceaccount admin-dashboard
kubectl create clusterrolebinding admin-dashboard --clusterrole=cluster-admin --serviceaccount=default:admin-dashboard

Then create this file using nano/vi:

nano token.yml

Insert this YAML:

apiVersion: v1
kind: Secret
metadata:
  name: admin-dashboard-token
  annotations:
    kubernetes.io/service-account.name: admin-dashboard
type: kubernetes.io/service-account-token

Save and apply the file:

kubectl apply -f token.yml

Describe the token:

kubectl describe secret admin-dashboard-token -n default

Now you can login using that token.
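
If you only want the raw token string to paste into the login form, this prints it without the rest of the describe output:

kubectl get secret admin-dashboard-token -n default -o jsonpath='{.data.token}' | base64 --decode
# on Kubernetes 1.24+ you can also mint a short-lived token directly:
kubectl create token admin-dashboard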

Metrics Server

kubectl apply -f https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml

Edit the metrics-server deployment to fix the readiness probe error 500 by adding --kubelet-insecure-tls to the container arguments.
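
Instead of kubectl edit, a JSON patch can append the flag (a sketch; the container index 0 assumes metrics-server is the only container in that deployment), and kubectl top confirms it works:

kubectl -n kube-system patch deployment metrics-server --type='json' \
  -p='[{"op":"add","path":"/spec/template/spec/containers/0/args/-","value":"--kubelet-insecure-tls"}]'
kubectl top nodes   # should return CPU/memory usage once metrics-server is Ready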
