K8s High-Availability Cluster Installation Guide
1. Set hostnames
Change each machine's hostname with hostnamectl, for example:
hostnamectl set-hostname k8s-master-0
Once every host is renamed, add the hostname-to-IP mappings for all machines to each machine's hosts file, for example:
# Highly available floating IP (VIP)
10.16.10.60 vip-k8s-master
10.16.10.61 k8s-master-0
10.16.10.62 k8s-master-1
10.16.10.63 k8s-master-2
10.16.10.64 k8s-node-0
10.16.10.65 k8s-node-1
10.16.10.66 k8s-node-2
10.16.10.67 k8s-node-3
2. Install keepalived and haproxy on all control-plane nodes
Run the following command on every control-plane node:
yum install haproxy keepalived -y
On k8s-master-0, create the health-check script /etc/keepalived/check_apiserver.sh:
#!/bin/sh
APISERVER_VIP=10.16.10.60
APISERVER_DEST_PORT=6443

errorExit() {
    echo "*** $*" 1>&2
    exit 1
}

curl --silent --max-time 2 --insecure https://localhost:${APISERVER_DEST_PORT}/ -o /dev/null || errorExit "Error GET https://localhost:${APISERVER_DEST_PORT}/"
if ip addr | grep -q ${APISERVER_VIP}; then
    curl --silent --max-time 2 --insecure https://${APISERVER_VIP}:${APISERVER_DEST_PORT}/ -o /dev/null || errorExit "Error GET https://${APISERVER_VIP}:${APISERVER_DEST_PORT}/"
fi
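keepalived decides VIP failover from this script's exit status: any non-zero exit counts as one failed check. The errorExit helper is what produces that exit, so here is a small self-contained sketch of just that behavior, run in a subshell so the calling shell survives:

```shell
#!/bin/sh
# errorExit exactly as in check_apiserver.sh: message to stderr, exit 1.
errorExit() {
    echo "*** $*" 1>&2
    exit 1
}

# Invoke it in a subshell; capture the message and the exit status keepalived sees.
msg=$( (errorExit "Error GET https://localhost:6443/") 2>&1 )
rc=$?
echo "rc=$rc"      # rc=1 -> keepalived counts one failed check
echo "msg=$msg"
```

Only after ten consecutive non-zero exits (the fall setting in the configuration below) does keepalived lower this node's priority.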
Make the script executable:
chmod +x /etc/keepalived/check_apiserver.sh
Edit the keepalived configuration file:
# Back up the original file
cp /etc/keepalived/keepalived.conf{,.save}
# Truncate the file
> /etc/keepalived/keepalived.conf
Paste the following into /etc/keepalived/keepalived.conf:
! /etc/keepalived/keepalived.conf
! Configuration File for keepalived
global_defs {
    router_id LVS_DEVEL
}
vrrp_script check_apiserver {
    script "/etc/keepalived/check_apiserver.sh"
    interval 3
    weight -2
    fall 10
    rise 2
}
vrrp_instance VI_1 {
    state MASTER
    interface eth0
    virtual_router_id 151
    priority 255
    authentication {
        auth_type PASS
        auth_pass P@##D321!
    }
    virtual_ipaddress {
        10.16.10.60/24
    }
    track_script {
        check_apiserver
    }
}
Note: when copying keepalived.conf to the other two control-plane nodes, change state to BACKUP (keepalived's counterpart to MASTER; there is no SLAVE state) and set priority to 254 and 253 respectively.
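The two edits in the note can be scripted so the BACKUP copies never drift from the MASTER file. A minimal sketch: the sed patterns assume the exact state MASTER and priority 255 lines shown above, and it is exercised here on a two-line stub rather than the real file:

```shell
# Rewrite a MASTER keepalived.conf as a BACKUP one with the given priority.
# Reads stdin, writes stdout; patterns match the configuration shown above.
derive_backup() {
    sed -e 's/state MASTER/state BACKUP/' -e "s/priority 255/priority $1/"
}

stub='state MASTER
priority 255'
out=$(printf '%s\n' "$stub" | derive_backup 254)
echo "$out"
# On the real hosts, e.g.:
#   derive_backup 254 < /etc/keepalived/keepalived.conf > keepalived.conf.k8s-master-1
#   derive_backup 253 < /etc/keepalived/keepalived.conf > keepalived.conf.k8s-master-2
```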
Edit the haproxy configuration file:
# Back up the original file
cp /etc/haproxy/haproxy.cfg{,.save}
Delete everything after the defaults block and add the following:
#---------------------------------------------------------------------
# apiserver frontend which proxies to the masters
#---------------------------------------------------------------------
frontend apiserver
    bind *:8443
    mode tcp
    option tcplog
    default_backend apiserver
#---------------------------------------------------------------------
# round robin balancing for apiserver
#---------------------------------------------------------------------
backend apiserver
    option httpchk GET /healthz
    http-check expect status 200
    mode tcp
    option ssl-hello-chk
    balance roundrobin
    server k8s-master-0 10.16.10.61:6443 check
    server k8s-master-1 10.16.10.62:6443 check
    server k8s-master-2 10.16.10.63:6443 check
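Before restarting haproxy it is worth validating the file; `haproxy -c -f /etc/haproxy/haproxy.cfg` performs a full syntax check. As a lighter sanity check, the sketch below confirms that every control-plane node has a server line in the backend; it runs here against a stub copy, and on the real hosts cfg would point at /etc/haproxy/haproxy.cfg:

```shell
# Confirm each expected control-plane node has a "server" line in the backend.
cfg=$(mktemp)    # stand-in for /etc/haproxy/haproxy.cfg
cat > "$cfg" <<'EOF'
    server k8s-master-0 10.16.10.61:6443 check
    server k8s-master-1 10.16.10.62:6443 check
    server k8s-master-2 10.16.10.63:6443 check
EOF

missing=0
for n in k8s-master-0 k8s-master-1 k8s-master-2; do
    grep -q "server $n " "$cfg" || { echo "missing backend entry: $n"; missing=1; }
done
echo "missing=$missing"    # missing=0 when all three are present
```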
Copy the keepalived and haproxy configuration files to the other two control-plane nodes:
for f in k8s-master-1 k8s-master-2; do scp /etc/keepalived/check_apiserver.sh /etc/keepalived/keepalived.conf root@$f:/etc/keepalived; scp /etc/haproxy/haproxy.cfg root@$f:/etc/haproxy; done
Adjust the system configuration:
# Adjust the firewall (allow VRRP and the haproxy frontend port)
firewall-cmd --add-rich-rule='rule protocol value="vrrp" accept' --permanent
firewall-cmd --permanent --add-port=8443/tcp
firewall-cmd --reload
# Enable and start the services
systemctl enable keepalived --now
systemctl enable haproxy --now
# Control-plane node firewall
firewall-cmd --permanent --add-port=6443/tcp
firewall-cmd --permanent --add-port=2379-2380/tcp
firewall-cmd --permanent --add-port=10250/tcp
firewall-cmd --permanent --add-port=10251/tcp
firewall-cmd --permanent --add-port=10252/tcp
firewall-cmd --permanent --add-port=179/tcp
firewall-cmd --permanent --add-port=4789/udp
firewall-cmd --reload
modprobe br_netfilter
sh -c "echo '1' > /proc/sys/net/bridge/bridge-nf-call-iptables"
sh -c "echo '1' > /proc/sys/net/ipv4/ip_forward"
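The two echo writes above take effect immediately but do not survive a reboot. The usual way to persist them is a drop-in under /etc/sysctl.d; the 99-kubernetes.conf filename here is illustrative. The sketch writes to a temp directory so it runs unprivileged, while on the real hosts the target is /etc/sysctl.d:

```shell
# Persist the bridge-netfilter and forwarding settings across reboots.
sysctl_d=$(mktemp -d)    # stand-in for /etc/sysctl.d on the real hosts
cat > "$sysctl_d/99-kubernetes.conf" <<'EOF'
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF
cat "$sysctl_d/99-kubernetes.conf"
# Apply without rebooting once the file is in /etc/sysctl.d:
#   sysctl --system
```

Similarly, the br_netfilter module can be loaded at boot via a file in /etc/modules-load.d rather than a one-off modprobe.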
# Worker node firewall
firewall-cmd --permanent --add-port=10250/tcp
firewall-cmd --permanent --add-port=30000-32767/tcp
firewall-cmd --permanent --add-port=179/tcp
firewall-cmd --permanent --add-port=4789/udp
firewall-cmd --reload
modprobe br_netfilter
sh -c "echo '1' > /proc/sys/net/bridge/bridge-nf-call-iptables"
sh -c "echo '1' > /proc/sys/net/ipv4/ip_forward"
3. Disable swap
If swap is left enabled, Kubernetes will run into errors; even if the installation succeeds, Kubernetes will fail again after a node reboot.
swapoff -a
sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab
4. Install kubeadm, kubelet, and kubectl
Add the package repository:
cat <<EOF | sudo tee /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-\$basearch
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
exclude=kubelet kubeadm kubectl
EOF
Install the packages:
yum install -y kubelet kubeadm kubectl --disableexcludes=kubernetes
Enable and start kubelet on all nodes:
systemctl enable kubelet --now
5. Configure containerd
Comment out the disabled_plugins = ["cri"] line in the containerd configuration and add the private image registry settings:
[plugins]
  [plugins."io.containerd.grpc.v1.cri"]
    sandbox_image = "swr.cq-region-2.sgic.sgcc.com.cn/k8s/pause:3.9"
  [plugins.cri]
    [plugins.cri.registry]
      [plugins.cri.registry.mirrors]
        [plugins.cri.registry.mirrors."swr.cq-region-2.sgic.sgcc.com.cn"]
          endpoint = ["http://swr.cq-region-2.sgic.sgcc.com.cn"]
The sandbox_image setting did not take effect and the cluster kept pulling the default pause image during deployment, so as a temporary workaround, add a local tag to the image:
ctr -n k8s.io i tag swr.cq-region-2.sgic.sgcc.com.cn/k8s/pause:3.9 registry.k8s.io/pause:3.6
6. Initialize the cluster
kubeadm init --control-plane-endpoint "vip-k8s-master:8443" --upload-certs --pod-network-cidr=10.244.10.0/16 --image-repository=swr.cq-region-2.sgic.sgcc.com.cn/k8s --v=5
Deploy the network add-on (Calico):
kubectl create -f https://raw.githubusercontent.com/projectcalico/calico/v3.26.1/manifests/tigera-operator.yaml
On k8s-master-1 and k8s-master-2, run a command like the following:
kubeadm join vip-k8s-master:8443 --token r3g4lg.808yznkgswngn1el \
--discovery-token-ca-cert-hash sha256:37501bfeab42b97b9a9a18728aea7a00fe3ba5a28c0207fc72180827612340c2 \
--control-plane --certificate-key 160ab777510190cd1576e7bdb6e24153d2e55acf6f2c6f972cc52eae364f560f
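The token and certificate key in the example above are short-lived: by default the bootstrap token expires after 24 hours and the uploaded certificate key after 2 hours, so later joins usually need fresh values. These are standard kubeadm subcommands run on an existing control-plane node; the sketch guards on kubeadm being present so it is a no-op elsewhere:

```shell
# Regenerate join credentials on an existing control-plane node.
if command -v kubeadm >/dev/null 2>&1; then
    # Prints a complete worker join command with a fresh token.
    kubeadm token create --print-join-command
    # Re-uploads control-plane certs and prints a new --certificate-key.
    kubeadm init phase upload-certs --upload-certs
    status=ran
else
    status=skipped    # kubeadm is not installed on this machine
fi
echo "$status"
```

Appending the printed --certificate-key and --control-plane flags to the printed join command yields a control-plane join like the one above.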
On the worker nodes, run a command like the following:
kubeadm join vip-k8s-master:8443 --token r3g4lg.808yznkgswngn1el \
--discovery-token-ca-cert-hash sha256:37501bfeab42b97b9a9a18728aea7a00fe3ba5a28c0207fc72180827612340c2