Kubernetes multi-master cluster with kubeadm, HAProxy and Keepalived

Alparslan Ozturk
3 min read · Jan 7, 2022

Follow this documentation to set up a highly available Kubernetes cluster using Debian GNU/Linux 11 (bullseye).

This documentation guides you through setting up a cluster with 3 master nodes, 1 worker node and 2 load balancer nodes using HAProxy. Additionally, keepalived and haproxy could also be configured directly on the master nodes for a future installation with the load balancer built in.

  • Password for the root account on all these virtual machines is parola
  • Perform all commands as the root user unless otherwise specified
  • Keepalived monitors only the haproxy service and carries only the VIP (2.2.2.10) between the load balancers

Virtual IP managed by Keepalived on the load balancer nodes
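The "monitor only haproxy" behaviour means keepalived keeps a node eligible for the VIP only while a haproxy process is running. The decision boils down to a quorum test, which can be sketched like this (illustration only, not actual keepalived code — the quorum value 1 matches the keepalived.conf used later in this article):

```shell
# Illustration of keepalived's vrrp_track_process quorum logic: the node
# stays eligible for the VIP only while at least `quorum` matching
# haproxy processes are running.
track_process() {
  procs=$1    # number of running haproxy processes
  quorum=$2   # minimum required (quorum 1 in this article's config)
  if [ "$procs" -ge "$quorum" ]; then echo UP; else echo DOWN; fi
}

track_process 1 1   # haproxy running -> UP (node may hold the VIP)
track_process 0 1   # haproxy stopped -> DOWN (VIP fails over)
```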

If you want to try this in a virtualized environment on your workstation, you will need:

  • VMware Workstation installed
  • Vagrant installed
  • Vagrant VMware provider & plugin installed
  • Microsoft Windows OpenSSH feature installed
  • Host machine with at least 8 cores
  • Host machine with at least 8 GB of memory
net start vagrant-vmware-utility
vagrant plugin install vagrant-vmware-desktop
set VAGRANT_DEFAULT_PROVIDER=vmware_workstation
echo %VAGRANT_DEFAULT_PROVIDER%
On both load balancer nodes, install keepalived and haproxy:

apt update && apt install -y keepalived haproxy

Write the below lines to /etc/keepalived/keepalived.conf

cat > /etc/keepalived/keepalived.conf <<EOF
global_defs {
    script_user root
    enable_script_security
}
vrrp_track_process check_haproxy {
    process haproxy
    quorum 1
    delay 2
}
vrrp_instance VI_HAPROXY {
    state MASTER            # MASTER on the first LB, BACKUP on the second
    interface ens34
    virtual_router_id 51
    priority 101            # 101 on the MASTER, 100 on the BACKUP
    virtual_ipaddress {
        2.2.2.10
    }
    track_process {
        check_haproxy
    }
}
EOF
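With both load balancers up, VRRP simply awards the VIP to the node advertising the highest priority (101 beats 100 above). A minimal sketch of that election — the hostnames lb1/lb2 are assumptions for illustration:

```shell
# Minimal sketch of the VRRP election: given "name:priority" pairs, the
# node advertising the highest priority wins the VIP.
elect_vip_holder() {
  printf '%s\n' "$@" | sort -t: -k2,2 -rn | head -n1 | cut -d: -f1
}

elect_vip_holder lb1:101 lb2:100   # -> lb1 holds the VIP
elect_vip_holder lb2:100           # lb1 down -> lb2 takes over
```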

Append the below lines to /etc/haproxy/haproxy.cfg

cat >> /etc/haproxy/haproxy.cfg <<EOF
listen stats
    mode http
    bind *:80
    stats enable
    stats uri /
listen kubernetes-api
    mode tcp
    bind *:6443
    option tcplog
    # the apiserver serves /healthz over HTTPS, so the check must use SSL
    option httpchk GET /healthz
    http-check expect status 200
    balance roundrobin
    default-server check check-ssl verify none inter 2s fall 3 rise 2
    server master1 2.2.2.21:6443
    server master2 2.2.2.22:6443
    server master3 2.2.2.23:6443
EOF
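The `balance roundrobin` line hands successive API connections to the three masters in rotation. Roughly, the selection works like this (an illustration of the balancing idea, not haproxy internals):

```shell
# Rough illustration of "balance roundrobin": connection n goes to
# backend (n mod 3) among the three apiserver addresses.
pick_backend() {
  n=$1                            # connection counter
  set -- 2.2.2.21 2.2.2.22 2.2.2.23
  shift "$((n % 3))"
  echo "$1"
}

pick_backend 0   # -> 2.2.2.21
pick_backend 1   # -> 2.2.2.22
pick_backend 3   # wraps back around to 2.2.2.21
```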

Kubernetes Setup

{
apt-get install -y ca-certificates curl gnupg lsb-release
curl -fsSL https://download.docker.com/linux/debian/gpg | gpg --dearmor -o /usr/share/keyrings/docker-archive-keyring.gpg
echo "deb [arch=$(dpkg --print-architecture) signed-by=/usr/share/keyrings/docker-archive-keyring.gpg] https://download.docker.com/linux/debian $(lsb_release -cs) stable" | tee /etc/apt/sources.list.d/docker.list > /dev/null
apt update && apt-get -y install docker-ce docker-ce-cli containerd.io
}
{
apt-get install -y apt-transport-https ca-certificates curl
curl -fsSLo /usr/share/keyrings/kubernetes-archive-keyring.gpg https://packages.cloud.google.com/apt/doc/apt-key.gpg
echo "deb [signed-by=/usr/share/keyrings/kubernetes-archive-keyring.gpg] https://apt.kubernetes.io/ kubernetes-xenial main" | tee /etc/apt/sources.list.d/kubernetes.list
}
apt-get update && apt-get install -y kubelet kubeadm kubectl && apt-mark hold kubelet kubeadm kubectl

kubeadm init --control-plane-endpoint="loadbalancer.ornek.com:6443" --upload-certs --apiserver-advertise-address=2.2.2.21

Copy the kubeadm join commands for the other master nodes and worker nodes from the output. Check that the API server answers through the load balancer, then install the Weave pod network:

curl -isk https://2.2.2.10:6443/healthz
curl -isk https://2.2.2.11:6443/healthz
curl -sSL "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')&env.IPALLOC_RANGE=10.32.0.0/12" > weave.yaml
kubectl --kubeconfig=/etc/kubernetes/admin.conf create -f weave.yaml

Use the respective kubeadm join commands you copied from the output of kubeadm init command on the first master.

IMPORTANT: You also need to pass --apiserver-advertise-address to the join command when you join the other master nodes.
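For reference, the join commands printed by kubeadm init look roughly like the following. The token, CA cert hash and certificate key are placeholders — take the real values from your own kubeadm init output, never copy these literally:

```
# Control-plane join (run on master2/master3; <...> values come from
# your own `kubeadm init` output):
kubeadm join loadbalancer.ornek.com:6443 --token <token> \
    --discovery-token-ca-cert-hash sha256:<hash> \
    --control-plane --certificate-key <cert-key> \
    --apiserver-advertise-address=2.2.2.22

# Worker join (no --control-plane flag):
kubeadm join loadbalancer.ornek.com:6443 --token <token> \
    --discovery-token-ca-cert-hash sha256:<hash>
```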

On your host machine

mkdir ~/.kube 
scp root@2.2.2.101:/etc/kubernetes/admin.conf ~/.kube/config-vagrant
export KUBECONFIG=~/.kube/config-vagrant

Password for root account is kubeadmin (if you used my Vagrant setup)

kubectl cluster-info 
kubectl get nodes
kubectl get cs
To remove a node from the cluster:

kubectl drain master1 --ignore-daemonsets --delete-emptydir-data
kubectl delete node master1

Have Fun!!

Originally published at https://github.com.
