Deploying highly available Kubernetes 1.9.2 with kubeadm
Posted: 2022-12-15 17:00:01
Installing HA Kubernetes 1.9.2.
Node information:
Hostname        IP              Notes
docker09        10.211.121.9    master and etcd
docker10        10.211.121.10   master and etcd
docker22        10.211.121.22   master and etcd
vip-keepalive   10.211.121.102  VIP for high availability
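The hostnames above are assumed to resolve on every node; if there is no internal DNS, /etc/hosts entries like the following (my assumption, matching the node table) will do:

```
10.211.121.9    docker09
10.211.121.10   docker10
10.211.121.22   docker22
10.211.121.102  vip-keepalive
```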
I. System initialization
1. Set up the yum repos:
sudo rpm -Uvh http://nginx.org/packages/centos/7/noarch/RPMS/nginx-release-centos-7-0.el7.ngx.noarch.rpm
wget -O /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo
wget -O /etc/yum.repos.d/epel.repo http://mirrors.aliyun.com/repo/epel-7.repo
cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sudo sysctl --system
# disable swap
swapoff -a
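swapoff -a only lasts until the next reboot; the swap entry in /etc/fstab also needs commenting out to make it stick. A sketch (the function name and sed pattern are mine, assuming a conventional fstab layout):

```shell
# Comment out every active fstab line whose filesystem type column is "swap".
# Prints the result instead of editing in place, so it can be reviewed first.
disable_swap_in_fstab() {
    sed -E 's@^([^#].*[[:space:]]swap[[:space:]].*)$@#\1@' "$1"
}
```

Usage: `disable_swap_in_fstab /etc/fstab > /tmp/fstab.noswap`, inspect the output, then move it over /etc/fstab.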
2. Upgrade the OS kernel
rpm --import https://www.elrepo.org/RPM-GPG-KEY-elrepo.org
rpm -Uvh http://www.elrepo.org/elrepo-release-7.0-2.el7.elrepo.noarch.rpm
yum --enablerepo=elrepo-kernel install kernel-ml-devel kernel-ml -y
grub2-set-default 0
reboot
3. Configure passwordless SSH trust between the nodes (omitted)
II. Install Docker (Kubernetes 1.9 supports Docker up to version 17.03)
# install docker
sudo yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
sudo yum makecache fast
To install a pinned version, see:
https://yq.aliyun.com/articles/110806
yum install -y https://mirrors.aliyun.com/docker-ce/linux/centos/7/x86_64/stable/Packages/docker-ce-selinux-17.03.2.ce-1.el7.centos.noarch.rpm
yum install -y docker-ce-17.03.2.ce-1.el7.centos
# work around a too-small /var partition
mkdir -p /data0/docker/var && ln -s /data0/docker/var /var/lib/docker
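The one-liner above only works if /var/lib/docker does not exist yet; if Docker has already started once, ln -s would silently create the link inside the existing directory instead. A guarded sketch (the function name is mine; the paths are the article's):

```shell
# Link the Docker data root onto a bigger partition, refusing to touch an
# existing /var/lib/docker (that would need a data migration, not a symlink).
relocate_docker_root() {
    local target=$1 link=$2
    mkdir -p "$target"
    if [ -e "$link" ] || [ -L "$link" ]; then
        echo "refusing: $link already exists; stop docker and migrate it first" >&2
        return 1
    fi
    ln -s "$target" "$link"
}
```

Usage: `relocate_docker_root /data0/docker/var /var/lib/docker`, run before Docker's first start.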
Private registry and registry mirror:
mkdir -p /etc/docker/
cat > /etc/docker/daemon.json <<'EOF'
{
  "storage-driver": "overlay2",
  "storage-opts": ["overlay2.override_kernel_check=true"],
  "registry-mirrors": ["https://pej3ico7.mirror.aliyuncs.com"],
  "insecure-registries": ["10.211.121.26:5000", "10.211.121.9:5000"],
  "live-restore": false
}
EOF
systemctl start docker
III. Install Kubernetes
Official HA cluster deployment documentation:
https://kubernetes.io/docs/setup/independent/high-availability/
Before you begin:
1. Because of network restrictions, the required packages and images cannot be downloaded directly, so I pre-downloaded the offline installation packages and images (link below). Ideally, of course, you should fix the network access problem itself; feel free to message me if you need help with that:
链接:https://pan.baidu.com/s/1dzQyiq 密码:dyvi
# install kubelet, kubectl and CNI
cd k8s192 && yum localinstall -y *.rpm
# load the images
for i in *.tar; do docker load < $i; done
2. Change kubelet's cgroup-driver to cgroupfs, matching Docker's:
Environment="KUBELET_CGROUP_ARGS=--cgroup-driver=cgroupfs"
sed -i 's/systemd/cgroupfs/g' /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
systemctl daemon-reload
3. Shell completion for kubectl:
yum install -y bash-completion
source /usr/share/bash-completion/bash_completion
source <(kubectl completion bash)
echo "source <(kubectl completion bash)" >> ~/.bashrc
Setting up an HA etcd cluster
https://kubernetes.io/docs/setup/independent/high-availability/
1. Create the etcd CA certs, following the official document.
2. Run etcd
I deploy etcd with systemd directly on the physical machines (non-containerized). The etcd binaries also have to be downloaded, which again requires working network access.
The official document has a pitfall here:
ExecStart=/usr/local/bin/etcd --name ${PEER_NAME}
The PEER_NAME value must match the member name used in
--initial-cluster
on every node. When the first etcd member starts, it waits for the other members to join; etcd only becomes healthy once all of them are up.
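To illustrate the naming consistency, here is a sketch of the systemd unit for docker09, adapted from the official guide (the data dir, cluster token, and peer cert file names are my assumptions; note that --name docker09 matches the docker09= entry in --initial-cluster):

```ini
[Unit]
Description=etcd
After=network.target

[Service]
Type=notify
Restart=on-failure
RestartSec=5
ExecStart=/usr/local/bin/etcd --name docker09 \
  --data-dir /var/lib/etcd \
  --listen-client-urls https://10.211.121.9:2379 \
  --advertise-client-urls https://10.211.121.9:2379 \
  --listen-peer-urls https://10.211.121.9:2380 \
  --initial-advertise-peer-urls https://10.211.121.9:2380 \
  --cert-file=/etc/kubernetes/pki/etcd/server.pem \
  --key-file=/etc/kubernetes/pki/etcd/server-key.pem \
  --client-cert-auth \
  --trusted-ca-file=/etc/kubernetes/pki/etcd/ca.pem \
  --peer-cert-file=/etc/kubernetes/pki/etcd/peer.pem \
  --peer-key-file=/etc/kubernetes/pki/etcd/peer-key.pem \
  --peer-client-cert-auth \
  --peer-trusted-ca-file=/etc/kubernetes/pki/etcd/ca.pem \
  --initial-cluster docker09=https://10.211.121.9:2380,docker10=https://10.211.121.10:2380,docker22=https://10.211.121.22:2380 \
  --initial-cluster-token etcd-cluster-1 \
  --initial-cluster-state new

[Install]
WantedBy=multi-user.target
```

On docker10 and docker22, change --name and the four URL flags to the local hostname and IP; --initial-cluster stays identical everywhere.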
Check that etcd is healthy:
etcdctl --endpoints=https://10.211.121.9:2379 --ca-file=/etc/kubernetes/pki/etcd/ca.pem --cert-file=/etc/kubernetes/pki/etcd/server.pem --key-file=/etc/kubernetes/pki/etcd/server-key.pem cluster-health
member 6ebda37987af36d is healthy: got healthy result from https://10.211.121.22:2379
member 15f530c6e1580621 is healthy: got healthy result from https://10.211.121.9:2379
member a43675f5f779e638 is healthy: got healthy result from https://10.211.121.10:2379
cluster is healthy
3. Set up master load balancer
I chose the on-site option: keepalived running on the three master nodes.
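A minimal keepalived sketch for the VIP (the interface name, priorities, and password are assumptions; state is MASTER on one node and BACKUP, with lower priority, on the other two):

```
vrrp_instance K8S_VIP {
    state MASTER              # BACKUP on docker10 and docker22
    interface eth0            # assumption: adjust to the host's actual NIC
    virtual_router_id 51
    priority 100              # e.g. 90 and 80 on the backups
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass k8s-ha      # assumption: pick your own secret
    }
    virtual_ipaddress {
        10.211.121.102
    }
}
```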
4. Run kubeadm init on master0
Initialize the first master node with kubeadm.
# just in case, clean up any traces a previous installation may have left
kubeadm reset
ifconfig cni0 down
ip link delete cni0
ifconfig flannel.1 down
ip link delete flannel.1
rm -rf /var/lib/cni/
Create /etc/kubernetes/config.yaml:
apiVersion: kubeadm.k8s.io/v1alpha1
kind: MasterConfiguration
api:
  advertiseAddress: 10.211.121.102
etcd:
  endpoints:
  - https://10.211.121.9:2379
  - https://10.211.121.10:2379
  - https://10.211.121.22:2379
  caFile: /etc/kubernetes/pki/etcd/ca.pem
  certFile: /etc/kubernetes/pki/etcd/client.pem
  keyFile: /etc/kubernetes/pki/etcd/client-key.pem
kubernetesVersion: 1.9.2
networking:
  podSubnet: 10.244.0.0/16
apiServerCertSANs:
- 10.211.121.9
- 10.211.121.10
- 10.211.121.22
- docker09
- docker10
- docker22
apiServerExtraArgs:
  endpoint-reconciler-type: lease
advertiseAddress: 10.211.121.102 is the VIP.
podSubnet: 10.244.0.0/16 must be the same subnet the flannel manifest deploys, otherwise pod networking will fail.
(kubernetesVersion must be set explicitly; otherwise kubeadm tries to pull images from the internet and eventually fails.)
Run:
kubeadm init --config=/etc/kubernetes/config.yaml
Sync the generated key files to the other master nodes:
scp root@
rm apiserver.
5. Run the same command on the other master nodes:
kubeadm init --config=/etc/kubernetes/config.yaml
6. Install the network add-on
I use flannel as the overlay network here.
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/v0.9.1/Documentation/kube-flannel.yml
Or:
kubectl apply -f https://docs.projectcalico.org/v2.0/getting-started/kubernetes/installation/hosted/kubeadm/calico.yaml
https://kubernetes.io/docs/tasks/run-application/run-stateless-application-deployment/
7. For testing, make the masters schedulable
By default, to keep the masters safe, application pods are not scheduled onto them. You can lift that restriction with:
kubectl taint nodes --all node-role.kubernetes.io/master-
At this point the deployment is complete: three master nodes.
kubectl get node
NAME STATUS ROLES AGE VERSION
docker09 Ready master 33m v1.9.2
docker10 NotReady master 31m v1.9.2
docker22 Ready master 30m v1.9.2
References:
https://kubernetes.io/docs/setup/independent/high-availability/
https://www.kubernetes.org.cn/3808.html