
A 10-Minute Guide to Building a Kubernetes Cluster with kubeadm

Date: 2023-01-24 20:00:00

Cluster setup background

Environment requirements:

  • A compatible Linux host. The Kubernetes project provides generic instructions for Linux distributions based on Debian and Red Hat, and those distributions without a package manager.

  • 2 GB or more of RAM per machine (any less will leave little room for your apps).

  • 2 CPUs or more.

  • Full network connectivity between all machines in the cluster (public or private network is fine).

  • Unique hostname, MAC address, and product_uuid for every node. See here for more details.

  • Certain ports are open on your machines. See here for more details.

  • Swap disabled. You MUST disable swap in order for the kubelet to work properly.

1. System environment (2 cores, 4 GB): CentOS 7
master node IP: 10.229.1.168
node1 worker node IP: 10.229.3.251
All operations are run as the root account
2. Change the hostnames:

# Note: the hostname must not contain "_" or ".", otherwise kubeadm init fails with
# [nodeRegistration.name]: Invalid value: "k8s_master"

# Set the hostname on the master node
$ hostnamectl set-hostname k8s-master

# Set the hostname on node1
$ hostnamectl set-hostname k8s-node1

# Add hosts entries on both master and node1
$ cat >> /etc/hosts << EOF
10.229.1.168 k8s-master
10.229.3.251 k8s-node1
EOF
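kubeadm rejects "k8s_master" because node names must be valid DNS-1123 labels: lowercase alphanumerics and "-", starting and ending with an alphanumeric, at most 63 characters. A minimal pre-check sketch; the valid_hostname helper and the sample names are illustrative, not part of kubeadm:

```shell
# valid_hostname: succeeds when the name is a valid DNS-1123 label
# (lowercase alphanumerics and '-', must start/end alphanumeric, <= 63 chars)
valid_hostname() {
  echo "$1" | grep -Eq '^[a-z0-9]([a-z0-9-]{0,61}[a-z0-9])?$'
}

valid_hostname k8s-master && echo "k8s-master: ok"
valid_hostname k8s_master || echo "k8s_master: rejected"
valid_hostname k8s.node1  || echo "k8s.node1: rejected"
```

Running this prints "ok" for k8s-master and "rejected" for the other two names, which matches the kubeadm init error quoted above.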

3. Run the following on all nodes
(1) Disable the firewall, SELinux, and swap

# Disable the firewall
$ systemctl status firewalld && systemctl stop firewalld && systemctl disable firewalld

# Disable SELinux
$ cat /etc/selinux/config
$ sed -i 's/enforcing/disabled/' /etc/selinux/config   # permanent
$ setenforce 0                                          # temporary

# Disable swap
$ swapoff -a        # temporary
$ vim /etc/fstab    # permanent: delete or comment out the line
                    #   /mnt/swap swap swap defaults 0 0

(2) Pass bridged IPv4 traffic to iptables chains

# Pass bridged IPv4 traffic to iptables chains:
$ cat > /etc/sysctl.d/k8s.conf << EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF

# Apply the settings
$ sysctl --system
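kubeadm's preflight checks fail if these two keys do not resolve to 1, so it is worth confirming the file content before relying on sysctl --system. A small sanity check written against a temporary copy so it can run anywhere; on the real host, point the grep at /etc/sysctl.d/k8s.conf instead:

```shell
# Render the sysctl fragment to a temporary path and verify that both
# bridge keys are set to 1 (substitute /etc/sysctl.d/k8s.conf on a node).
conf=$(mktemp)
cat > "$conf" << EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF

for key in net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables; do
  grep -Eq "^$key *= *1$" "$conf" && echo "$key: ok"
done
rm -f "$conf"
```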

(3) Use the Aliyun yum repository

# Switch the yum repository (standard Aliyun Kubernetes mirror definition)
$ cat > /etc/yum.repos.d/kubernetes.repo << EOF
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

(4) Install docker, kubelet, kubeadm, and kubectl

# Kubernetes' default CRI (container runtime) is Docker, so install Docker first
$ yum -y install docker

# Start docker
$ systemctl enable docker && systemctl start docker

# Verify the docker installation
$ docker -v

# Configure a registry mirror (fill in your accelerator URL)
$ cat > /etc/docker/daemon.json << EOF
{
  "registry-mirrors": [""]
}
EOF

# Restart docker
$ systemctl restart docker

# Install kubeadm, kubelet, and kubectl
$ yum install -y kubelet kubeadm kubectl

# Start kubelet
$ systemctl enable kubelet && systemctl start kubelet
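dockerd refuses to start if /etc/docker/daemon.json is malformed, so validating the JSON before the restart saves a confusing outage. A sketch using python3 as the validator (jq works equally well); the candidate file is a temporary copy and the empty mirror URL is a placeholder:

```shell
# Validate a daemon.json candidate before installing it; a malformed
# file prevents dockerd from starting at all.
candidate=$(mktemp)
cat > "$candidate" << 'EOF'
{
  "registry-mirrors": [""]
}
EOF

if python3 -m json.tool "$candidate" > /dev/null 2>&1; then
  echo "daemon.json: valid JSON"
else
  echo "daemon.json: INVALID, fix it before restarting docker" >&2
fi
rm -f "$candidate"
```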

Master node installation

1. Install with kubeadm init

# This command may fail to pull images (e.g. when the network is blocked)
$ kubeadm init --pod-network-cidr 10.244.0.0/16 --kubernetes-version latest

# If the pull above failed, list the required images
$ kubeadm config images list
k8s.gcr.io/kube-apiserver:v1.22.1
k8s.gcr.io/kube-controller-manager:v1.22.1
k8s.gcr.io/kube-scheduler:v1.22.1
k8s.gcr.io/kube-proxy:v1.22.1
k8s.gcr.io/pause:3.5
k8s.gcr.io/etcd:3.5.0-0
k8s.gcr.io/coredns/coredns:v1.8.4

# If those cannot be pulled either, pull the images manually from the mirror
$ docker pull registry.aliyuncs.com/google_containers/kube-apiserver:v1.22.1 && \
docker pull registry.aliyuncs.com/google_containers/kube-controller-manager:v1.22.1 && \
docker pull registry.aliyuncs.com/google_containers/kube-scheduler:v1.22.1 && \
docker pull registry.aliyuncs.com/google_containers/kube-proxy:v1.22.1 && \
docker pull registry.aliyuncs.com/google_containers/pause:3.5 && \
docker pull registry.aliyuncs.com/google_containers/etcd:3.5.0-0 && \
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:1.8.4

# Re-tag the images to the names kubeadm expects
$ docker tag registry.aliyuncs.com/google_containers/kube-apiserver:v1.22.1  k8s.gcr.io/kube-apiserver:v1.22.1 && \
docker tag registry.aliyuncs.com/google_containers/kube-controller-manager:v1.22.1  k8s.gcr.io/kube-controller-manager:v1.22.1 && \
docker tag registry.aliyuncs.com/google_containers/kube-scheduler:v1.22.1  k8s.gcr.io/kube-scheduler:v1.22.1 && \
docker tag registry.aliyuncs.com/google_containers/kube-proxy:v1.22.1  k8s.gcr.io/kube-proxy:v1.22.1 && \
docker tag registry.aliyuncs.com/google_containers/etcd:3.5.0-0  k8s.gcr.io/etcd:3.5.0-0 && \
docker tag registry.aliyuncs.com/google_containers/pause:3.5  k8s.gcr.io/pause:3.5 && \
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:1.8.4 k8s.gcr.io/coredns/coredns:v1.8.4


# Remove the now-redundant mirror-tagged images
$ docker rmi registry.aliyuncs.com/google_containers/kube-apiserver:v1.22.1 \
registry.aliyuncs.com/google_containers/kube-controller-manager:v1.22.1 \
registry.aliyuncs.com/google_containers/kube-scheduler:v1.22.1 \
registry.aliyuncs.com/google_containers/kube-proxy:v1.22.1 \
registry.aliyuncs.com/google_containers/etcd:3.5.0-0 \
registry.aliyuncs.com/google_containers/pause:3.5 \
registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:1.8.4
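The pull/tag/rmi sequence is identical for every image, so it can be generated with a loop instead of typed by hand. A dry-run sketch that only prints the commands (pipe the output to sh to actually run them); the mirror prefix is the Aliyun one used above, and the image list mirrors `kubeadm config images list` for v1.22.1:

```shell
# Generate pull/tag/rmi commands that map k8s.gcr.io images to the
# Aliyun mirror. Dry run: commands are printed, not executed.
mirror="registry.aliyuncs.com/google_containers"
images="kube-apiserver:v1.22.1 kube-controller-manager:v1.22.1 \
kube-scheduler:v1.22.1 kube-proxy:v1.22.1 pause:3.5 etcd:3.5.0-0"

for img in $images; do
  echo "docker pull $mirror/$img"
  echo "docker tag $mirror/$img k8s.gcr.io/$img"
  echo "docker rmi $mirror/$img"
done

# coredns lives under an extra path component upstream, so handle it separately
echo "docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:1.8.4"
echo "docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:1.8.4 k8s.gcr.io/coredns/coredns:v1.8.4"
```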


# Check that the images are in place
$ docker images


# Once the images are pulled, re-run the initialization
$ kubeadm init --pod-network-cidr 10.244.0.0/16 --kubernetes-version latest

  • The following output indicates a successful installation

Your Kubernetes control-plane has initialized successfully!
 
To start using your cluster, you need to run the following as a regular user:
 
  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config
 
Alternatively, if you are the root user, you can run:
 
  export KUBECONFIG=/etc/kubernetes/admin.conf
 
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  
 
Then you can join any number of worker nodes by running the following on each as root:
 
kubeadm join 172.16.1.197:6443 --token ebi9py.oz4hmt72yk1wlvoe \
  --discovery-token-ca-cert-hash sha256:9990f921f6c66423fc097f81f2c4d5f2b851dc906cbce966db99de73dbce793b
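The --discovery-token-ca-cert-hash value is simply the SHA-256 of the cluster CA's public key in DER form, so it can be recomputed on the master if the init output is lost. A sketch of that derivation, run here against a throwaway self-signed certificate so it works anywhere; on a real master, substitute /etc/kubernetes/pki/ca.crt:

```shell
# Recompute the discovery hash: sha256 over the CA public key in DER form.
# A throwaway cert stands in for /etc/kubernetes/pki/ca.crt here.
openssl req -x509 -newkey rsa:2048 -nodes -days 1 -subj /CN=demo-ca \
  -keyout /tmp/demo-ca.key -out /tmp/demo-ca.crt 2>/dev/null

hash=$(openssl x509 -pubkey -noout -in /tmp/demo-ca.crt \
  | openssl rsa -pubin -outform der 2>/dev/null \
  | openssl dgst -sha256 -hex | sed 's/^.*= //')
echo "sha256:$hash"
```

The printed value has the same shape as the sha256:… argument in the join command above.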

2. Run the following to copy the kubeconfig that kubectl uses into the default path

$ mkdir -p $HOME/.kube
$ cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
$ chown $(id -u):$(id -g) $HOME/.kube/config
$ export KUBECONFIG=/etc/kubernetes/admin.conf

3. Check the nodes. They show NotReady at this point because no network plugin is installed yet; the flannel section below covers that.

$ kubectl get nodes
k8s-master   NotReady   control-plane,master   2m54s   v1.22.1

node1 node installation

  • After installing docker, kubeadm, kubectl, and kubelet, join the node to the Kubernetes cluster

#Run the following on the node to pull the required images
 
[root@k8s-node1 ~]# docker pull registry.aliyuncs.com/google_containers/kube-proxy:v1.22.1 && \
docker tag registry.aliyuncs.com/google_containers/kube-proxy:v1.22.1  k8s.gcr.io/kube-proxy:v1.22.1 && \
docker rmi registry.aliyuncs.com/google_containers/kube-proxy:v1.22.1
 
[root@k8s-node1 ~]#  docker pull registry.aliyuncs.com/google_containers/pause:3.5 && \
docker tag registry.aliyuncs.com/google_containers/pause:3.5  k8s.gcr.io/pause:3.5 && \
docker rmi registry.aliyuncs.com/google_containers/pause:3.5
 
#****** Image list required on the node1 node; all three must be present ******#
[root@k8s-node1 ~]# docker images
REPOSITORY               TAG                 IMAGE ID            CREATED             SIZE
k8s.gcr.io/kube-proxy    v1.22.1             36c4ebbc9d97        2 weeks ago         104 MB
quay.io/coreos/flannel   v0.14.0             8522d622299c        3 months ago        67.9 MB
k8s.gcr.io/pause         3.5                 ed210e3e4a5b        5 months ago        683 kB
 
[root@k8s-node1 ~]# kubeadm join 172.16.1.197:6443 --token ebi9py.oz4hmt72yk1wlvoe \
  --discovery-token-ca-cert-hash sha256:9990f921f6c66423fc097f81f2c4d5f2b851dc906cbce966db99de73dbce793b
 
#The command below fails at first; the fix is to copy /etc/kubernetes/admin.conf over from the master node
[root@k8s-node1 ~]# kubectl get nodes
The connection to the server localhost:8080 was refused - did you specify the right host or port?
 
[root@k8s-node1 ~]# scp root@k8s-master:/etc/kubernetes/admin.conf /etc/kubernetes/admin.conf
 
[root@k8s-node1 ~]# export KUBECONFIG=/etc/kubernetes/admin.conf
 
#Run it again; the output is now normal
[root@k8s-node1 ~]# kubectl get nodes
NAME         STATUS   ROLES                  AGE     VERSION
k8s-master   NotReady   control-plane,master   2d17h   v1.22.1
k8s-node1    NotReady   <none>                 2d17h   v1.22.1
 
# Alternatively, make the setting persistent:
[root@k8s-node1 ~]# echo "export KUBECONFIG=/etc/kubernetes/admin.conf" >> /etc/profile
[root@k8s-node1 ~]# source /etc/profile

Installing the flannel network plugin

  • The nodes are NotReady because no network plugin has been installed yet (flannel must be installed on both the master and node1)

#Download kube-flannel.yml
$ wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
 
# Inspect the network subnet configuration
[root@k8s-node1 ~]# cat kube-flannel.yml |grep -A6 "net-conf"
  net-conf.json: |
    {
      "Network": "10.244.0.0/16",
      "Backend": {
        "Type": "vxlan"
      }
    }
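The "Network" value in net-conf.json must match the --pod-network-cidr passed to kubeadm init, or pods get addresses flannel does not route. A small check of that invariant; the expected CIDR is the one used in this guide, and the sed extraction assumes the yaml layout shown above (on a real host, grep kube-flannel.yml instead of the embedded sample line):

```shell
# Verify the flannel Network CIDR matches kubeadm's --pod-network-cidr.
expected="10.244.0.0/16"

# On a node: actual=$(grep '"Network"' kube-flannel.yml | sed -E ...)
actual=$(printf '      "Network": "10.244.0.0/16",\n' \
  | sed -E 's/.*"Network": "([^"]+)".*/\1/')

if [ "$actual" = "$expected" ]; then
  echo "net-conf Network matches pod-network-cidr ($actual)"
else
  echo "MISMATCH: flannel=$actual kubeadm=$expected" >&2
fi
```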
 
#This command may fail if the image cannot be pulled
[root@k8s-node1 ~]#  kubectl apply -f kube-flannel.yml
 
#Check service and node status
[root@k8s-node1 ~]#  kubectl get pods -n kube-system -o wide
#Or use
[root@k8s-node1 ~]#  kubectl get pods --all-namespaces
 
#Dig further with the logs. Viewing a pod's logs is the most common and convenient way to troubleshoot, and usually pinpoints the root cause quickly.
[root@k8s-node1 ~]# kubectl logs kube-flannel-ds-wk9tj -n kube-system
I0905 12:53:00.709734       1 main.go:520] Determining IP address of default interface
I0905 12:53:00.710553       1 main.go:533] Using interface with name eth0 and address 172.16.0.188
I0905 12:53:00.710607       1 main.go:550] Defaulting external address to interface address (172.16.0.188)
W0905 12:53:00.710694       1 client_config.go:608] Neither --kubeconfig nor --master was specified.  Using the inClusterConfig.  This might not work.
I0905 12:53:01.214239       1 kube.go:116] Waiting 10m0s for node controller to sync
I0905 12:53:01.214324       1 kube.go:299] Starting kube subnet manager
..........
 
#Or inspect the pod state with describe
$ kubectl describe pod kube-flannel-ds-wk9tj -n kube-system
#The event below means the image was not pulled; run the fix on the node
#Warning  FailedCreatePodSandBox  2m22s               kubelet            Failed to create pod sandbox: rpc error: code = Unknown desc = failed pulling image "k8s.gcr.io/pause:3.5": Get  dial tcp 142.250.157.82:443: i/o timeout
 
#The image cannot be pulled because of network restrictions, so fetch the flannel image manually
$ wget https://github.com/flannel-io/flannel/releases/download/v0.14.0/flanneld-v0.14.0-amd64.docker


#Load it into docker
$ docker load < flanneld-v0.14.0-amd64.docker && \
docker tag quay.io/coreos/flannel:v0.14.0-amd64 quay.io/coreos/flannel:v0.14.0 && \
docker rmi quay.io/coreos/flannel:v0.14.0-amd64
 
#Reinstall
[root@k8s-node1 ~]# kubectl delete -f kube-flannel.yml
[root@k8s-node1 ~]# kubectl apply -f kube-flannel.yml
 
#Check that the flannel process is running
$ ps -ef|grep flannel
 
#Wait a few minutes after installing flannel, then check service status again
$ kubectl get pods -n kube-system -o wide
#Check node status
$ kubectl get nodes

Checking cluster health

1. Cluster health check

#Check the controller-manager and scheduler status
[root@k8s-master ~]# kubectl get cs
Warning: v1 ComponentStatus is deprecated in v1.19+
NAME                 STATUS    MESSAGE                         ERROR
scheduler            Healthy   ok
controller-manager   Healthy   ok
etcd-0               Healthy   {"health":"true","reason":""}

If the cluster is unhealthy

#If the cluster is unhealthy, edit the following two files: search for --port=0 and comment that line out
[root@k8s-master ~]# vim /etc/kubernetes/manifests/kube-scheduler.yaml
[root@k8s-master ~]# vim /etc/kubernetes/manifests/kube-controller-manager.yaml
# Search for --port=0 and comment the line out
    #    - --port=0
[root@k8s-master ~]# systemctl restart kubelet
[root@k8s-master ~]# kubectl get cs
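The same edit can be scripted with sed instead of vim. Shown here against a temporary copy of a manifest fragment, since touching files under /etc/kubernetes/manifests immediately triggers a static-pod restart; on the master, point the sed at the two manifest files above:

```shell
# Comment out the --port=0 flag in a manifest. Demonstrated on a temp
# copy; substitute the files under /etc/kubernetes/manifests on a master.
m=$(mktemp)
cat > "$m" << 'EOF'
    - --leader-elect=true
    - --port=0
EOF

sed -i 's/^\([[:space:]]*\)- --port=0/\1#    - --port=0/' "$m"
grep -- '--port=0' "$m"
rm -f "$m"
```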

2. Inspect the cluster configuration

#View the cluster configuration
[root@k8s-master ~]# kubectl get configmap -n kube-system
NAME                                 DATA   AGE
coredns                              1      2d18h
extension-apiserver-authentication   6      2d18h
kube-flannel-cfg                     2      2d16h
kube-proxy                           2      2d18h
kube-root-ca.crt                     1      2d18h
kubeadm-config                       1      2d18h
kubelet-config-1.22                  1      2d18h
 
[root@k8s-master ~]# kubectl get configmap kube-flannel-cfg -n kube-system -o yaml
 
#Check whether the certificates have expired
[root@k8s-master ~]# kubeadm certs check-expiration
[check-expiration] Reading configuration from the cluster...
[check-expiration] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
 
CERTIFICATE                EXPIRES                  RESIDUAL TIME   CERTIFICATE AUTHORITY   EXTERNALLY MANAGED
admin.conf                 Sep 05, 2022 10:32 UTC   362d                                    no
apiserver                  Sep 05, 2022 10:32 UTC   362d            ca                      no
apiserver-etcd-client      Sep 05, 2022 10:32 UTC   362d            etcd-ca                 no
apiserver-kubelet-client   Sep 05, 2022 10:32 UTC   362d            ca                      no
controller-manager.conf    Sep 05, 2022 10:32 UTC   362d                                    no
etcd-healthcheck-client    Sep 05, 2022 10:32 UTC   362d            etcd-ca                 no
etcd-peer                  Sep 05, 2022 10:32 UTC   362d            etcd-ca                 no
etcd-server                Sep 05, 2022 10:32 UTC   362d            etcd-ca                 no
front-proxy-client         Sep 05, 2022 10:32 UTC   362d            front-proxy-ca          no
scheduler.conf             Sep 05, 2022 10:32 UTC   362d                                    no
 
CERTIFICATE AUTHORITY   EXPIRES                  RESIDUAL TIME   EXTERNALLY MANAGED
ca                      Sep 03, 2031 10:32 UTC   9y              no
etcd-ca                 Sep 03, 2031 10:32 UTC   9y              no
front-proxy-ca          Sep 03, 2031 10:32 UTC   9y              no
 
[root@k8s-master ~]# kubeadm token create --print-join-command
kubeadm join 172.16.1.197:6443 --token pr0pdb.fayae0jrwfr6wmkb --discovery-token-ca-cert-hash sha256:9990f921f6c66423fc097f81f2c4d5f2b851dc906cbce966db99de73dbce793b
 
#Or:
[root@k8s-master ~]#  kubeadm token generate
#Generate a worker join command from the token (--ttl=0 makes it non-expiring)
[root@k8s-master ~]#  kubeadm token create  --print-join-command --ttl=0
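Bootstrap tokens always have the form "6 characters, a dot, 16 characters" over [a-z0-9], which makes it easy to spot a mangled copy-paste before running kubeadm join. A sketch of that check, using the token printed above:

```shell
# Validate a bootstrap token's abcdef.0123456789abcdef shape before
# pasting it into kubeadm join.
token="pr0pdb.fayae0jrwfr6wmkb"
if echo "$token" | grep -Eq '^[a-z0-9]{6}\.[a-z0-9]{16}$'; then
  echo "token format ok"
else
  echo "token looks mangled" >&2
fi
```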

Cluster test

1. nginx test from the master node

#Deploy
[root@k8s-master ~]# kubectl create deployment nginx-deploy --image=nginx
[root@k8s-master ~]# kubectl expose deployment nginx-deploy --port=80 --type=NodePort

#Check the IP and port, then test
[root@k8s-master ~]# kubectl get pod,svc
NAME                               READY   STATUS    RESTARTS   AGE
pod/nginx-deploy-8588f9dfb-hqv4h   1/1     Running   0          5m
 
NAME                   TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
service/kubernetes     ClusterIP   10.96.0.1       <none>        443/TCP        159m
service/nginx-deploy   NodePort    10.104.174.27   <none>        80:32353/TCP   4m49s
 
#Check the pod IP addresses
[root@k8s-master ~]# kubectl get pods -n kube-system -o wide
NAME                                 READY   STATUS    RESTARTS        AGE     IP             NODE         NOMINATED NODE   READINESS GATES
coredns-78fcd69978-cjclt             1/1     Running   0               2d18h   10.244.0.3     k8s-master   <none>           <none>
coredns-78fcd69978-vkwpn             1/1     Running   0               2d18h   10.244.0.2     k8s-master   <none>           <none>
etcd-k8s-master                      1/1     Running   0               2d18h   172.16.1.197   k8s-master   <none>           <none>
kube-apiserver-k8s-master            1/1     Running   0               2d18h   172.16.1.197   k8s-master   <none>           <none>
kube-controller-manager-k8s-master   1/1     Running   0               2d15h   172.16.1.197   k8s-master   <none>           <none>
kube-flannel-ds-d5h7g                1/1     Running   0               2d16h   172.16.1.197   k8s-master   <none>           <none>
kube-flannel-ds-wk9tj                1/1     Running   8 (2d15h ago)   2d16h   172.16.0.188   k8s-node1    <none>           <none>
kube-proxy-jrjjs                     1/1     Running   0               2d18h   172.16.1.197   k8s-master   <none>           <none>
kube-proxy-pnnlq                     1/1     Running   0               2d18h   172.16.0.188   k8s-node1    <none>           <none>
kube-scheduler-k8s-master            1/1     Running   0               2d15h   172.16.1.197   k8s-master   <none>           <none>
 
#curl the service using the master or worker node address and inspect the response
[root@k8s-master ~]# curl 172.16.0.188:32353

Welcome to nginx!

If you see this page, the nginx web server is successfully installed and working. Further configuration is required.

For online documentation and support please refer to nginx.org.
Commercial support is available at nginx.com.

Thank you for using nginx.

[root@k8s-master ~]# curl 172.16.1.197:32353

Welcome to nginx!

If you see this page, the nginx web server is successfully installed and working. Further configuration is required.

For online documentation and support please refer to nginx.org.
Commercial support is available at nginx.com.

Thank you for using nginx.

Installing the dashboard

1. Check version compatibility

#kubectl reports version 1.22.1
[root@k8s-master ~]# kubectl version
Client Version: [version.Info](){Major:"1", Minor:"22", GitVersion:"v1.22.1", GitCommit:"632ed300f2c34f6d6d15ca4cef3d3c7073412212", GitTreeState:"clean", BuildDate:"2021-08-19T15:45:37Z", GoVersion:"go1.16.7", Compiler:"gc", Platform:"linux/amd64"}
Server Version: [version.Info](){Major:"1", Minor:"22", GitVersion:"v1.22.1", GitCommit:"632ed300f2c34f6d6d15ca4cef3d3c7073412212", GitTreeState:"clean", BuildDate:"2021-08-19T15:39:34Z", GoVersion:"go1.16.7", Compiler:"gc", Platform:"linux/amd64"}

2. Dashboard compatibility matrix: https://github.com/kubernetes/dashboard/releases

3. Install the dashboard

#Install
[root@k8s-master ~]# kubectl apply -f []()

#Check the dashboard's running status
[root@k8s-master ~]# kubectl get pod -n kubernetes-dashboard

#Patch the Dashboard service to NodePort, exposing port 30001
[root@k8s-master ~]# kubectl patch svc kubernetes-dashboard \
  -n kubernetes-dashboard \
  -p '{"spec":{"type":"NodePort","ports":[{"port":443,"targetPort":8443,"nodePort":30001}]}}'
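A malformed -p string makes kubectl patch fail with an opaque parse error, so the JSON can be validated first. A sketch using python3 as the validator; the patch body is the one from the command above, and the 30000-32767 range is the default Kubernetes service node port range:

```shell
# Validate the NodePort patch JSON before handing it to kubectl patch.
patch='{"spec":{"type":"NodePort","ports":[{"port":443,"targetPort":8443,"nodePort":30001}]}}'
echo "$patch" | python3 -m json.tool > /dev/null && echo "patch JSON ok"

# NodePorts must fall in the default 30000-32767 service node port range.
port=30001
[ "$port" -ge 30000 ] && [ "$port" -le 32767 ] && echo "nodePort $port in range"
```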

4. The dashboard login address is https://10.229.1.168:30001. Note: if Chrome warns that the connection is not private, blind-type thisisunsafe and press Enter to proceed.

5. Create an account

#Create dashboard-adminuser.yaml
[root@k8s-master ~]# cat > dashboard-adminuser.yaml << EOF
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kubernetes-dashboard
 
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kubernetes-dashboard  
EOF
 
#Create the login user
[root@k8s-master ~]# kubectl apply -f dashboard-adminuser.yaml
 
#Retrieve the admin-user account's token
[root@k8s-master ~]# kubectl -n kubernetes-dashboard describe secret $(kubectl -n kubernetes-dashboard get secret | grep admin-user | awk '{print $1}')
Name:         admin-user-token-4vbjr
Namespace:    kubernetes-dashboard
Labels:       <none>
Annotations:  kubernetes.io/service-account.name: admin-user
              kubernetes.io/service-account.uid: 108fbc06-8eda-4e11-9b81-a598555465ce
 
Type:  kubernetes.io/service-account-token
 
Data
====
namespace:  20 bytes
token:      eyJhbGciOiJSUzI1NiIsImtpZCI6ImRGT2NkYUhpX2pOMHZOT2ZOeXlUTUhXNmstZkZIdmdFMzFvTzRKV2JSeGsifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJhZG1pbi11c2VyLXRva2VuLTR2YmpyIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZSI6ImFkbWluLXVzZXIiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC51aWQiOiIxMDhmYmMwNi04ZWRhLTRlMTEtOWI4MS1hNTk4NTU1NDY1Y2UiLCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6a3ViZXJuZXRlcy1kYXNoYm9hcmQ6YWRtaW4tdXNlciJ9.idJV83hWmQTCJU9ikSjATiHXYosHHZlJYeeUZ31WHA_SyxHQGpVKGimRtPrTCSHdZ-RINNrGD0lRMJOQin7pazGLwip3haP8l5CyP7zK0YEj6mETboA0rrbEkD7BRwVB7Hip27XSvwP_nPgrghei2htiKKcS5N15ExuoOc1zgHi2QzvH5Qc76oINTdKje3PI-tQ7PyHtrgGOpZudVxEgykXxIGGhD7uE_UBPEGeSb26l_Nm9cfZMt_ebe9h87kqdQn4QeIG-bRK6klR-uWU7dzRngCit18OjUYzfG-_lZCdNAW6XfeIQWceV7EAGOarg_iKOT4RuQSHcbnZRQ1FPQg
ca.crt:     1099 bytes
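The token above is a JWT, so its middle dot-separated segment can be base64-decoded to confirm which service account it belongs to before pasting it into the login screen. A sketch on a synthetic token rather than the real one (real JWT segments are base64url, so run tr '_-' '/+' on them first); substitute the token printed by kubectl describe secret:

```shell
# Decode a JWT's payload (second dot-separated segment) to inspect the
# service account it was issued for. A synthetic token stands in for
# the real one here.
payload=$(printf '{"sub":"system:serviceaccount:kubernetes-dashboard:admin-user"}' | base64 | tr -d '=\n')
token="header.$payload.signature"

seg=$(echo "$token" | awk -F. '{print $2}')
# base64 needs the '=' padding that JWT encoding strips
while [ $(( ${#seg} % 4 )) -ne 0 ]; do seg="$seg="; done
echo "$seg" | base64 -d
echo
```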

6. Log in to the dashboard with the token above


7. Home page after login

References

https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/install-kubeadm/
https://blog.csdn.net/flying_monkey_1/article/details/118701275
https://blog.csdn.net/weixin_40039683/article/details/112886735
https://github.com/kubernetes/dashboard/releases
