
2. Deploying a K8s Cluster from Binaries

Date: 2023-08-05 11:07:00

1 Network Segment Planning

Host node segment

192.168.31.0/24 

Service segment

10.244.0.0/16 

Pod segment

10.96.0.0/16 

The three segments must not overlap with one another.

2 Cluster Resource Allocation

Master nodes

4C8G * 3 

Worker nodes

4C8G * 3 

3 System Settings – All Nodes

Disable SELinux

# Disable temporarily
setenforce 0
# Disable permanently
sed -i "s/SELINUX=enforcing/SELINUX=disabled/g" /etc/selinux/config
sed -i 's#SELINUX=enforcing#SELINUX=disabled#g' /etc/sysconfig/selinux

Disable the Firewall

iptables -F
systemctl disable --now firewalld

Configure Hosts – All Nodes

192.168.31.101 k8s-master-01
192.168.31.102 k8s-master-02
192.168.31.103 k8s-master-03
192.168.31.104 k8s-node-01
192.168.31.105 k8s-node-02
192.168.31.106 k8s-node-03
192.168.31.100 k8s-master-lb # If this is not an HA cluster, this IP is Master01's IP
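
A quick sanity check that every entry resolves and answers (a minimal sketch, run from any node once the hosts file is in place; the VIP is excluded since keepalived is not configured yet):

for i in k8s-master-01 k8s-master-02 k8s-master-03 k8s-node-01 k8s-node-02 k8s-node-03; do
    ping -c 1 -W 1 $i > /dev/null && echo "$i ok" || echo "$i FAILED"
done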

Configure YUM Repositories

# Aliyun base repo
curl -o /etc/yum.repos.d/CentOS-Base.repo https://mirrors.aliyun.com/repo/Centos-7.repo

# Docker dependencies
yum install -y yum-utils device-mapper-persistent-data lvm2

# Docker CE repo
yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo

# Kubernetes repo
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

# Strip the internal Aliyun mirror domains from the base repo
sed -i -e '/mirrors.cloud.aliyuncs.com/d' -e '/mirrors.aliyuncs.com/d' /etc/yum.repos.d/CentOS-Base.repo

# Install the required tools
yum install wget jq psmisc vim net-tools telnet yum-utils device-mapper-persistent-data lvm2 git -y

Disable Swap

swapoff -a && sysctl -w vm.swappiness=0
sed -ri '/^[^#]*swap/s@^@#@' /etc/fstab
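
A quick verification that swap is fully off:

swapon --show    # prints nothing once all swap devices are disabled
free -h          # the Swap line should read 0B total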

Configure Time Synchronization

rpm -ivh http://mirrors.wlnmp.com/centos/wlnmp-release-centos.noarch.rpm
yum install ntpdate -y

# Add to crontab (see the sketch below)
*/5 * * * * /usr/sbin/ntpdate time2.aliyun.com
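
The crontab line above is an entry, not a command; one non-interactive way to install it into root's crontab (a sketch) is:

(crontab -l 2>/dev/null; echo '*/5 * * * * /usr/sbin/ntpdate time2.aliyun.com') | crontab -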

Configure Limits

ulimit -SHn 65535

cat >> /etc/security/limits.conf <<EOF
* soft nofile 65536
* hard nofile 131072
* soft nproc 65535
* hard nproc 655350
* soft memlock unlimited
* hard memlock unlimited
EOF

Passwordless SSH from Master01 to the Other Nodes

ssh-keygen -t rsa
for i in k8s-master-01 k8s-master-02 k8s-master-03 k8s-node-01 k8s-node-02 k8s-node-03; do ssh-copy-id -i .ssh/id_rsa.pub $i; done
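
To confirm passwordless login works before continuing (a minimal check; BatchMode makes ssh fail immediately instead of prompting for a password):

for i in k8s-master-01 k8s-master-02 k8s-master-03 k8s-node-01 k8s-node-02 k8s-node-03; do
    ssh -o BatchMode=yes $i hostname
done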

Download All Required Configuration Files

# Download and install all source files:
mkdir /data/src
cd /data/src && git clone https://github.com/dotbalo/k8s-ha-install.git
# If that is unreachable, clone the mirror instead: https://gitee.com/dukuan/k8s-ha-install.git

[root@temp k8s-ha-install]# git branch -a
* master
  remotes/origin/HEAD -> origin/master
  remotes/origin/manual-installation
  remotes/origin/manual-installation-v1.16.x
  remotes/origin/manual-installation-v1.17.x
  remotes/origin/manual-installation-v1.18.x
  remotes/origin/manual-installation-v1.19.x
  remotes/origin/manual-installation-v1.20.x
  remotes/origin/manual-installation-v1.20.x-csi-hostpath
  remotes/origin/manual-installation-v1.21.x
  remotes/origin/manual-installation-v1.22.x
  remotes/origin/manual-installation-v1.23.x
  remotes/origin/manual-installation-v1.24.x
  remotes/origin/master

[root@temp k8s-ha-install]# git checkout manual-installation-v1.23.x

Upgrade the System

yum update -y --exclude=kernel*

Upgrade the Kernel to 4.19

mkdir -p /data/src
cd /data/src
wget http://193.49.22.109/elrepo/kernel/el7/x86_64/RPMS/kernel-ml-devel-4.19.12-1.el7.elrepo.x86_64.rpm
wget http://193.49.22.109/elrepo/kernel/el7/x86_64/RPMS/kernel-ml-4.19.12-1.el7.elrepo.x86_64.rpm

# Copy from master01 to the other nodes:
for i in k8s-master-02 k8s-master-03 k8s-node-01 k8s-node-02 k8s-node-03;do scp kernel-ml-4.19.12-1.el7.elrepo.x86_64.rpm kernel-ml-devel-4.19.12-1.el7.elrepo.x86_64.rpm $i:/data/src/; done

# Install the kernel on all nodes
cd /data/src/ && yum localinstall -y kernel-ml*

# Set the new kernel as the default boot entry
grub2-set-default  0 && grub2-mkconfig -o /etc/grub2.cfg
grubby --args="user_namespace.enable=1" --update-kernel="$(grubby --default-kernel)"

# Verify the default kernel is 4.19
grubby --default-kernel

# Reboot the system

# After the reboot, verify the running kernel is 4.19
uname -a

Configure IPVS

# Install ipvsadm
yum install ipvsadm ipset sysstat conntrack libseccomp -y

# Load the IPVS modules
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack

# Make the IPVS modules load on boot
cat >> /etc/modules-load.d/ipvs.conf <<EOF
ip_vs
ip_vs_rr
ip_vs_wrr
ip_vs_sh
nf_conntrack
EOF
systemctl enable --now systemd-modules-load.service

Configure Kernel Parameters

cat <<EOF > /etc/sysctl.d/k8s.conf
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
fs.may_detach_mounts = 1
net.ipv4.conf.all.route_localnet = 1
vm.overcommit_memory=1
vm.panic_on_oom=0
fs.inotify.max_user_watches=89100
fs.file-max=52706963
fs.nr_open=52706963
net.netfilter.nf_conntrack_max=2310720
net.ipv4.tcp_keepalive_time = 600
net.ipv4.tcp_keepalive_probes = 3
net.ipv4.tcp_keepalive_intvl = 15
net.ipv4.tcp_max_tw_buckets = 36000
net.ipv4.tcp_tw_reuse = 1
net.ipv4.tcp_max_orphans = 327680
net.ipv4.tcp_orphan_retries = 3
net.ipv4.tcp_syncookies = 1
net.ipv4.tcp_max_syn_backlog = 16384
net.ipv4.ip_conntrack_max = 65536
net.ipv4.tcp_timestamps = 0
net.core.somaxconn = 16384
EOF

sysctl --system
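
A spot check that the settings took effect (a small sketch; note the net.bridge.* keys only appear once the br_netfilter module is loaded):

sysctl net.ipv4.ip_forward net.core.somaxconn
# each key should echo back the value configured in k8s.conf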

Reboot the Servers

After configuring the kernel parameters, reboot the servers and make sure the modules are still loaded after the restart.
reboot
lsmod | grep --color=auto -e ip_vs -e nf_conntrack

4 Installing the K8s Components and Runtime

Install Containerd as the Runtime – All Nodes

# Install docker-ce 20.10 on all nodes:
yum install docker-ce-20.10.* docker-ce-cli-20.10.* -y
# There is no need to start Docker; only Containerd has to be configured and started.

Configure the Modules Required by Containerd

cat <<EOF > /etc/modules-load.d/containerd.conf
overlay
br_netfilter
EOF

modprobe -- overlay
modprobe -- br_netfilter

Configure Kernel Parameters for Containerd

cat <<EOF > /etc/sysctl.d/99-kubernetes-cri.conf
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
EOF

sysctl --system

Modify the Containerd Configuration File

# Create the configuration directory
mkdir -p /etc/containerd
# Generate the default configuration file
containerd config default | tee /etc/containerd/config.toml

  • Switch Containerd's cgroup driver to systemd
vim /etc/containerd/config.toml
  • Find containerd.runtimes.runc.options and add SystemdCgroup = true


  • Change sandbox_image to a Pause image address that matches your version
# This must be changed when using Containerd
registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.6
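
If you prefer not to edit config.toml by hand, both changes can be scripted (a sketch assuming the default config generated above, where SystemdCgroup starts out as false):

sed -i 's#SystemdCgroup = false#SystemdCgroup = true#' /etc/containerd/config.toml
sed -i 's#sandbox_image = ".*"#sandbox_image = "registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.6"#' /etc/containerd/config.toml
# Verify both edits took effect
grep -E 'SystemdCgroup|sandbox_image' /etc/containerd/config.toml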

Start Containerd and Enable It at Boot

systemctl daemon-reload
systemctl enable --now containerd

Configure the crictl Client Runtime Endpoint on All Nodes

cat > /etc/crictl.yaml <<EOF
runtime-endpoint: unix:///run/containerd/containerd.sock
image-endpoint: unix:///run/containerd/containerd.sock
timeout: 10
debug: false
EOF
  • Once configured, test that the ctr command works, as shown below
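
For example (both clients should respond without errors once containerd is running):

ctr images ls      # talks to containerd directly through its socket
crictl info        # reads /etc/crictl.yaml to locate the runtime endpoint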

5 Installing the High-Availability Components

Install HAProxy and KeepAlived

# On all Master nodes
yum install keepalived haproxy -y

Modify the HAProxy Configuration (identical on all Master nodes)

  • Maps the apiservers' port 6443 to port 16443 on the VIP
cat > /etc/haproxy/haproxy.cfg << EOF
global
  maxconn  2000
  ulimit-n  16384
  log  127.0.0.1 local0 err
  stats timeout 30s

defaults
  log global
  mode  http
  option  httplog
  timeout connect 5000
  timeout client  50000
  timeout server  50000
  timeout http-request 15s
  timeout http-keep-alive 15s

frontend monitor-in
  bind *:33305
  mode http
  option httplog
  monitor-uri /monitor

frontend k8s-master
  bind 0.0.0.0:16443
  bind 127.0.0.1:16443
  mode tcp
  option tcplog
  tcp-request inspect-delay 5s
  default_backend k8s-master

backend k8s-master
  mode tcp
  option tcplog
  option tcp-check
  balance roundrobin
  default-server inter 10s downinter 5s rise 2 fall 2 slowstart 60s maxconn 250 maxqueue 256 weight 100
  server k8s-master01 192.168.31.101:6443  check
  server k8s-master02 192.168.31.102:6443  check
  server k8s-master03 192.168.31.103:6443  check
EOF

Configure KeepAlived

Note: the IP, network interface, state, and priority differ on each node and must be adjusted; Master-01 runs as MASTER with the highest priority, while Master-02 and Master-03 run as BACKUP with lower priorities.

  • Master-01
cat > /etc/keepalived/keepalived.conf <<EOF
! Configuration File for keepalived
global_defs {
    router_id LVS_DEVEL
    script_user root
    enable_script_security
}
vrrp_script chk_apiserver {
    script "/etc/keepalived/check_apiserver.sh"
    interval 5
    weight -5
    fall 2
    rise 1
}
vrrp_instance VI_1 {
    state MASTER
    interface eth0
    mcast_src_ip 192.168.31.101
    virtual_router_id 51
    priority 101
    advert_int 2
    authentication {
        auth_type PASS
        auth_pass K8SHA_KA_AUTH
    }
    virtual_ipaddress {
        192.168.31.100
    }
    track_script {
        chk_apiserver
    }
}
EOF
  • Master-02
cat > /etc/keepalived/keepalived.conf <<EOF
! Configuration File for keepalived
global_defs {
    router_id LVS_DEVEL
    script_user root
    enable_script_security
}
vrrp_script chk_apiserver {
    script "/etc/keepalived/check_apiserver.sh"
    interval 5
    weight -5
    fall 2
    rise 1
}
vrrp_instance VI_1 {
    state BACKUP
    interface eth0
    mcast_src_ip 192.168.31.102
    virtual_router_id 51
    priority 100
    advert_int 2
    authentication {
        auth_type PASS
        auth_pass K8SHA_KA_AUTH
    }
    virtual_ipaddress {
        192.168.31.100
    }
    track_script {
        chk_apiserver
    }
}
EOF
  • Master-03
cat > /etc/keepalived/keepalived.conf <<EOF
! Configuration File for keepalived
global_defs {
    router_id LVS_DEVEL
    script_user root
    enable_script_security
}
vrrp_script chk_apiserver {
    script "/etc/keepalived/check_apiserver.sh"
    interval 5
    weight -5
    fall 2
    rise 1
}
vrrp_instance VI_1 {
    state BACKUP
    interface eth0
    mcast_src_ip 192.168.31.103
    virtual_router_id 51
    priority 99
    advert_int 2
    authentication {
        auth_type PASS
        auth_pass K8SHA_KA_AUTH
    }
    virtual_ipaddress {
        192.168.31.100
    }
    track_script {
        chk_apiserver
    }
}
EOF
  • Configure the KeepAlived health-check script on all master nodes
cat > /etc/keepalived/check_apiserver.sh <<'EOF'
#!/bin/bash

err=0
for k in $(seq 1 3)
do
    check_code=$(pgrep haproxy)
    if [[ $check_code == "" ]]; then
        err=$(expr $err + 1)
        sleep 1
        continue
    else
        err=0
        break
    fi
done

if [[ $err != "0" ]]; then
    echo "systemctl stop keepalived"
    /usr/bin/systemctl stop keepalived
    exit 1
else
    exit 0
fi
EOF
# The quoted 'EOF' delimiter keeps $(seq ...) and the $ variables from being
# expanded by the shell while the file is written

# Make the script executable
chmod +x /etc/keepalived/check_apiserver.sh
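
The script can be exercised by hand before keepalived depends on it; it should exit 0 while haproxy is running and 1 after haproxy is stopped:

bash /etc/keepalived/check_apiserver.sh; echo $?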

Enable haproxy and keepalived at Boot

systemctl daemon-reload
systemctl enable --now haproxy keepalived

Check the Status of HAProxy and KeepAlived

ping 192.168.31.100
telnet 192.168.31.100 16443  

# If ping fails and telnet does not print ']', treat the VIP as unusable and troubleshoot keepalived,
# e.g. the firewall and selinux, and the state of haproxy and keepalived
# On all nodes the firewall must be disabled and inactive: systemctl status firewalld
# On all nodes getenforce must return Permissive or Disabled
# On the master nodes check the listening ports: netstat -lntp

6 Installing K8s and etcd

1. Check the required versions

# kubernetes
https://github.com/kubernetes/kubernetes/tree/release-1.23/CHANGELOG

# etcd
https://github.com/etcd-io/etcd/releases/

2. Download

# Kubernetes server binary package
wget https://dl.k8s.io/v1.23.6/kubernetes-server-linux-amd64.tar.gz

# etcd binary package
wget https://github.com/etcd-io/etcd/releases/download/v3.5.1/etcd-v3.5.1-linux-amd64.tar.gz

3. Extract the Kubernetes binaries

tar -xf kubernetes-server-linux-amd64.tar.gz \
    --strip-components=3 -C /usr/local/bin \
    kubernetes/server/bin/kube{let,ctl,-apiserver,-controller-manager,-scheduler,-proxy}

4. Extract the etcd binaries

tar -zxvf etcd-v3.5.1-linux-amd64.tar.gz \
    --strip-components=1 -C /usr/local/bin \
    etcd-v3.5.1-linux-amd64/etcd{,ctl}

5. Check the versions

[root@k8s-master-01 src]# kubelet --version
Kubernetes v1.23.6
[root@k8s-master-01 src]# etcdctl version
etcdctl version: 3.5.1
API version: 3.5

6. Distribute the binaries to the other nodes

MasterNodes='k8s-master-02 k8s-master-03'
WorkNodes='k8s-node-01 k8s-node-02 k8s-node-03'

for NODE in $MasterNodes; do echo $NODE; scp /usr/local/bin/kube{let,ctl,-apiserver,-controller-manager,-scheduler,-proxy} $NODE:/usr/local/bin/; scp /usr/local/bin/etcd* $NODE:/usr/local/bin/; done

for NODE in $WorkNodes; do scp /usr/local/bin/kube{let,-proxy} $NODE:/usr/local/bin/; done

7. Create the /usr/local/cni/bin directory on all nodes

mkdir -p /usr/local/cni/bin

8. Switch branches

# On Master01, switch to the 1.23.x branch
cd /data/src/k8s-ha-install
git checkout manual-installation-v1.23.x

7 Generating Certificates

Download the Certificate Generation Tools

wget "https://pkg.cfssl.org/R1.2/cfssl_linux-amd64" -O /usr/local/bin/cfssl
wget "https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64" -O /usr/local/bin/cfssljson
chmod +x /usr/local/bin/cfssl /usr/local/bin/cfssljson

etcd Certificates

  • Create the etcd certificate directory on all Master nodes
mkdir /etc/etcd/ssl -p
  • Create the Kubernetes-related directory on all nodes
mkdir -p /etc/kubernetes/pki
  • Generate the etcd certificates on Master01
cd /data/src/k8s-ha-install/pki
## Generate the etcd CA certificate and its key
cfssl gencert -initca etcd-ca-csr.json | cfssljson -bare /etc/etcd/ssl/etcd-ca
# etcd-ca-csr.json: the certificate signing request file; it carries the domain names, organization, unit, and similar details
cat etcd-ca-csr.json
{
  "CN": "etcd",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "Beijing",
      "L": "Beijing",
      "O": "etcd",
      "OU": "Etcd Security"
    }
  ],
  "ca": {
    "expiry": "876000h"
  }
}
  • Generate the etcd client certificate
cfssl gencert \
   -ca=/etc/etcd/ssl/etcd-ca.pem \
   -ca-key=/etc/etcd/ssl/etcd-ca-key.pem \
   -config=ca-config.json \
   -hostname=127.0.0.1,k8s-master-01,k8s-master-02,k8s-master-03,192.168.31.101,192.168.31.102,192.168.31.103 \
   -profile=kubernetes \
   etcd-csr.json | cfssljson -bare /etc/etcd/ssl/etcd

# hostname: the etcd nodes' hostnames and IP addresses; reserve a few extra hostnames and IPs if etcd may be scaled out later

# Output
2022/05/17 21:04:41 [INFO] generate received request
2022/05/17 21:04:41 [INFO] received CSR
2022/05/17 21:04:41 [INFO] generating key: rsa-2048
2022/05/17 21:04:41 [INFO] encoded CSR
2022/05/17 21:04:41 [INFO] signed certificate with serial number 374416837578176400852307196163272282609639254096
  • Copy the certificates to the other nodes
MasterNodes='k8s-master-02 k8s-master-03'
WorkNodes='k8s-node-01 k8s-node-02 k8s-node-03'

for NODE in $MasterNodes; do
     ssh $NODE "mkdir -p /etc/etcd/ssl"
     for FILE in etcd-ca-key.pem  etcd-ca.pem  etcd-key.pem  etcd.pem; do
       scp /etc/etcd/ssl/${FILE} $NODE:/etc/etcd/ssl/${FILE}
     done
 done
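
A quick way to confirm the certificates arrived intact (a verification sketch; openssl prints the subject and validity window of each copied certificate):

for NODE in k8s-master-02 k8s-master-03; do
    ssh $NODE "openssl x509 -in /etc/etcd/ssl/etcd.pem -noout -subject -dates"
done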

K8s Component Certificates

1 Generate the root certificate

  • Generate the Kubernetes root certificate on Master01, then use it to issue the certificates for the other components; the CA is the certificate authority
[root@k8s-master-01 pki]# cd /data/src/k8s-ha-install/pki
[root@k8s-master-01 pki]# cfssl gencert -initca ca-csr.json | cfssljson -bare /etc/kubernetes/pki/ca
2022/05/17 21:16:11 [INFO] generating a new CA key and certificate from CSR
2022/05/17 21:16:11 [INFO] generate received request
2022/05/17 21:16:11 [INFO] received CSR
2022/05/17 21:16:11 [INFO] generating key: rsa-2048
2022/05/17 21:16:11 [INFO] encoded CSR
2022/05/17 21:16:11 [INFO] signed certificate with serial number 356110579670536937029012243009570774526723248422

[root@k8s-master-01 pki]# ll /etc/kubernetes/pki/
total 12
-rw-r--r-- 1 root root 1025 May 17 21:16 ca.csr
-rw------- 1 root root 1675 May 17 21:16 ca-key.pem
-rw-r--r-- 1 root root 1411 May 17 21:16 ca.pem

[root@k8s-master-01 pki]# cat ca-csr.json 
{
  "CN": "kubernetes",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "Beijing",
      "L": "Beijing",
      "O": "Kubernetes",
      "OU": "Kubernetes-manual"
    }
  ],
  "ca": {
    "expiry": "876000h"
  }
}

2 Generate the apiserver certificate

cfssl gencert   -ca=/etc/kubernetes/pki/ca.pem \
    -ca-key=/etc/kubernetes/pki/ca-key.pem \
    -config=ca-config.json \
    -hostname=10.244.0.1,192.168.31.100,127.0.0.1,kubernetes,kubernetes.default,kubernetes.default.svc,kubernetes.default.svc.cluster,kubernetes.default.svc.cluster.local,192.168.31.101,192.168.31.102,192.168.31.103 \
    -profile=kubernetes apiserver-csr.json \
    | cfssljson -bare /etc/kubernetes/pki/apiserver

# 10.244.0.1 is the first address of the K8s Service segment; if you change the Service segment, change 10.244.0.1 accordingly
# If this is not an HA cluster, 192.168.31.100 is Master01's IP

# Output
2022/05/17 21:29:45 [INFO] generate received request
2022/05/17 21:29:45 [INFO] received CSR
2022/05/17 21:29:45 [INFO] generating key: rsa-2048
2022/05/17 21:29:45 [INFO] encoded CSR
2022/05/17 21:29:45 [INFO] signed certificate with serial number 44589346984861915136867175777203942181126965371
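
It is worth confirming that every address the apiserver will be reached on appears in the certificate's SANs (a small verification sketch using openssl):

openssl x509 -in /etc/kubernetes/pki/apiserver.pem -noout -text | grep -A1 'Subject Alternative Name'
# should list 10.244.0.1, 127.0.0.1, 192.168.31.100-103 and the kubernetes.default.* names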
  • Generate the apiserver aggregation certificate.

The aggregation certificate is used by third-party components and carries relatively limited privileges.

cfssl gencert \
    -initca front-proxy-ca-csr.json \
    | cfssljson -bare /etc/kubernetes/pki/front-proxy-ca 
# Output
2022/05/17 21:38:09 [INFO] generating a new CA key and certificate from CSR
2022/05/17 21:38:09 [INFO] generate received request
2022/05/17 21:38:09 [INFO] received CSR
2022/05/17 21:38:09 [INFO] generating key: rsa-2048
2022/05/17 21:38:09 [INFO] encoded CSR
2022/05/17 21:38:09 [INFO] signed certificate with serial number 94744298379229994126568334765473831074836354862

cfssl gencert \
    -ca=/etc/kubernetes/pki/front-proxy-ca.pem \
    -ca-key=/etc/kubernetes/pki/front-proxy-ca-key.pem \
    -config=ca-config.json \
    -profile=kubernetes   front-proxy-client-csr.json \
    | cfssljson -bare /etc/kubernetes/pki/front-proxy-client
    
# Output; the warning can be ignored
2022/05/17 21:39:01 [INFO] generate received request
2022/05/17 21:39:01 [INFO] received CSR
2022/05/17 21:39:01 [INFO] generating key: rsa-2048
2022/05/17 21:39:01 [INFO] encoded CSR
2022/05/17 21:39:01 [INFO] signed certificate with serial number 363789832426414262439760026013971715042182270063
2022/05/17 21:39:01 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for
websites. For more information see the Baseline Requirements for the Issuance and Management
of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org);
specifically, section 10.2.3 ("Information Requirements").

3 controller-manager certificate configuration

# 1. Generate the controller-manager certificate
cfssl gencert \
   -ca=/etc/kubernetes/pki/ca.pem \
   -ca-key=/etc/kubernetes/pki/ca-key.pem \
   -config=ca-config.json \
   -profile=kubernetes \
   manager-csr.json | cfssljson -bare /etc/kubernetes/pki/controller-manager

# -ca: the root certificate (public key)
# -ca-key: the root certificate's private key
# -config: the CA configuration file (signing profiles)
# manager-csr.json: controller-manager's certificate signing request file
# -bare: output path and file prefix for the generated certificate
[root@k8s-master-01 pki]# ls -l /etc/kubernetes/pki/controller-manager*
-rw-r--r-- 1 root root 1082 May 17 21:57 /etc/kubernetes/pki/controller-manager.csr
-rw------- 1 root root 1679 May 17 21:57 /etc/kubernetes/pki/controller-manager-key.pem
-rw-r--r-- 1 root root 1501 May 17 21:57 /etc/kubernetes/pki/controller-manager.pem


# 2. Generate controller-manager.kubeconfig, used to access the apiserver
# Note: if this is not an HA cluster, change 192.168.31.100:16443 to master01's address and 16443 to the apiserver port (default 6443)

# 2.1 set-cluster: define a cluster entry, set the apiserver address, and embed the root certificate
kubectl config set-cluster kubernetes \
     --certificate-authority=/etc/kubernetes/pki/ca.pem \
     --embed-certs=true \
     --server=https://192.168.31.100:16443 \
     --kubeconfig=/etc/kubernetes/controller-manager.kubeconfig
# --certificate-authority: the CA root certificate
# --embed-certs=true: embed the certificate into the kubeconfig
# --server: the apiserver VIP and port
# --kubeconfig: where to write the kubeconfig
# File contents
[root@k8s-master-01 pki]# cat /etc/kubernetes/controller-manager.kubeconfig
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUQ1RENDQXN5Z0F3SUJBZ0lVUG1DTStLMUtXa0hxMm1SYW1FY2dTYmlTVVNZd0RRWUpLb1pJaHZjTkFRRUwKQlFBd2R6RUxNQWtHQTFVRUJoTUNRMDR4RURBT0JnTlZCQWdUQjBKbGFXcHBibWN4RURBT0JnTlZCQWNUQjBKbAphV3BwYm1jeEV6QVJCZ05WQkFvVENrdDFZbVZ5Ym1WMFpYTXhHakFZQmdOVkJBc1RFVXQxWW1WeWJtVjBaWE10CmJXRnVkV0ZzTVJNd0VRWURWUVFERXdwcmRXSmxjbTVsZEdWek1DQVhEVEl5TURVeE56RXpNVEV3TUZvWUR6SXgKTWpJd05ESXpNVE14TVRBd1dqQjNNUXN3Q1FZRFZRUUdFd0pEVGpFUU1BNEdBMVVFQ0JNSFFtVnBhbWx1WnpFUQpNQTRHQTFVRUJ4TUhRbVZwYW1sdVp6RVRNQkVHQTFVRUNoTUtTM1ZpWlhKdVpYUmxjekVhTUJnR0ExVUVDeE1SClMzVmlaWEp1WlhSbGN5MXRZVzUxWVd3eEV6QVJCZ05WQkFNVENtdDFZbVZ5Ym1WMFpYTXdnZ0VpTUEwR0NTcUcKU0liM0RRRUJBUVVBQTRJQkR3QXdnZ0VLQW9JQkFRRFNwMjl5N2JKZXdlcW5tNmNUOEFDTjJ2bzh2b3BhN0JVego3dkNEVHJLM1N5V0JuSzQ5YnFKRDljWElxMWpzTmZQTCtRU3V3eG1nU3JJbWF1S0Y2RkUvalhXbStWMWhjN0ZZCjlldlFKV09rSHVLVllXUkMzWG50R3M4MSs5bStxRlY0eFllT2xqVlBEQ2FOK3Ntb2RML1lvY3o0QjNDVzNMbFEKcElEaVRxK2xZRTlKZDljcDF4NkF2cVpxVWdrNUJsVzFYeGNDTEU3aGU1ZlpyQUlBakRIa05wM1NXV2R0YjNZZQowWS9HbHZSWldCZkVOaEdRcE9KM3dFVlVXM1hyN1I1K3JEK3huUlZydk4xUlE0bUhnYW1XbzBEMituNUFrQWRCCkpRTFh3QnF3VHREZmRnQ09ra05ZTVpUeWpFemozNUxGUVltenREMVlua2JVaUhybEQxMHhBZ01CQUFHalpqQmsKTUE0R0ExVWREd0VCL3dRRUF3SUJCakFTQmdOVkhSTUJBZjhFQ0RBR0FRSC9BZ0VDTUIwR0ExVWREZ1FXQkJTUgpncmlqdm1BbW1oYXZTcjdtcTBURVdsQTgxakFmQmdOVkhTTUVHREFXZ0JTUmdyaWp2bUFtbWhhdlNyN21xMFRFCldsQTgxakFOQmdrcWhraUc5dzBCQVFzRkFBT0NBUUVBbitUQ2ZkdUVxTTk2QncycEk5aFZmMGpEWkh2bmRHeGwKRENwWFJZOXlrKzU1elUwZWhObWdhQ0gxREJXc2JQMWlQMUw1bjVnUXRDV0hjWVZZRkVMcVcrMFFYdEJVd21TMAp0VStnbkhiTy8xWWw0OG9oMG1YcmVPYnozQWJYeG5CRHRXQSt0amdtYy9ReDhhQkkzS1VQd1hUMjVqMnJycGNBCk56OWdwSHU4QmFEdGI1VWJ0K3JhaTZCTEtUVGQrNnZzZnozeG96ekNrSmdTcjYxeXFvd0V1RWFFQmlNY0hIOGUKZ0hNYm1UY216VTJuRmNiWUlmbk91RXVZSVVkQkxmWkhmM00vYUIreFdNdnM0OUVxQlFqNzloSEVkTERaK3BNSApPZkhwMGVoKzlMU0hLeXB6TWVEdUt3UC80U3crK0pmOXE0SWFENEdYaHMwV1cwL1lzRmpXZVE9PQotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0tCg==
    server: https://192.168.31.100:16443
  name: kubernetes
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

# 2.2 Define a context entry, binding the cluster to a user
kubectl config set-context system:kube-controller-manager@kubernetes \
    --cluster=kubernetes \
    --user=system:kube-controller-manager \
    --kubeconfig=/etc/kubernetes/controller-manager.kubeconfig
# --cluster: name of the cluster entry
# --user: the user name
# --kubeconfig: path of the kubeconfig file

# File contents
[root@k8s-master-01 pki]# cat /etc/kubernetes/controller-manager.kubeconfig
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUQ1RENDQXN5Z0F3SUJBZ0lVUG1DTStLMUtXa0hxMm1SYW1FY2dTYmlTVVNZd0RRWUpLb1pJaHZjTkFRRUwKQlFBd2R6RUxNQWtHQTFVRUJoTUNRMDR4RURBT0JnTlZCQWdUQjBKbGFXcHBibWN4RURBT0JnTlZCQWNUQjBKbAphV3BwYm1jeEV6QVJCZ05WQkFvVENrdDFZbVZ5Ym1WMFpYTXhHakFZQmdOVkJBc1RFVXQxWW1WeWJtVjBaWE10CmJXRnVkV0ZzTVJNd0VRWURWUVFERXdwcmRXSmxjbTVsZEdWek1DQVhEVEl5TURVeE56RXpNVEV3TUZvWUR6SXgKTWpJd05ESXpNVE14TVRBd1dqQjNNUXN3Q1FZRFZRUUdFd0pEVGpFUU1BNEdBMVVFQ0JNSFFtVnBhbWx1WnpFUQpNQTRHQTFVRUJ4TUhRbVZwYW1sdVp6RVRNQkVHQTFVRUNoTUtTM1ZpWlhKdVpYUmxjekVhTUJnR0ExVUVDeE1SClMzVmlaWEp1WlhSbGN5MXRZVzUxWVd3eEV6QVJCZ05WQkFNVENtdDFZbVZ5Ym1WMFpYTXdnZ0VpTUEwR0NTcUcKU0liM0RRRUJBUVVBQTRJQkR3QXdnZ0VLQW9JQkFRRFNwMjl5N2JKZXdlcW5tNmNUOEFDTjJ2bzh2b3BhN0JVego3dkNEVHJLM1N5V0JuSzQ5YnFKRDljWElxMWpzTmZQTCtRU3V3eG1nU3JJbWF1S0Y2RkUvalhXbStWMWhjN0ZZCjlldlFKV09rSHVLVllXUkMzWG50R3M4MSs5bStxRlY0eFllT2xqVlBEQ2FOK3Ntb2RML1lvY3o0QjNDVzNMbFEKcElEaVRxK2xZRTlKZDljcDF4NkF2cVpxVWdrNUJsVzFYeGNDTEU3aGU1ZlpyQUlBakRIa05wM1NXV2R0YjNZZQowWS9HbHZSWldCZkVOaEdRcE9KM3dFVlVXM1hyN1I1K3JEK3huUlZydk4xUlE0bUhnYW1XbzBEMituNUFrQWRCCkpRTFh3QnF3VHREZmRnQ09ra05ZTVpUeWpFemozNUxGUVltenREMVlua2JVaUhybEQxMHhBZ01CQUFHalpqQmsKTUE0R0ExVWREd0VCL3dRRUF3SUJCakFTQmdOVkhSTUJBZjhFQ0RBR0FRSC9BZ0VDTUIwR0ExVWREZ1FXQkJTUgpncmlqdm1BbW1oYXZTcjdtcTBURVdsQTgxakFmQmdOVkhTTUVHREFXZ0JTUmdyaWp2bUFtbWhhdlNyN21xMFRFCldsQTgxakFOQmdrcWhraUc5dzBCQVFzRkFBT0NBUUVBbitUQ2ZkdUVxTTk2QncycEk5aFZmMGpEWkh2bmRHeGwKRENwWFJZOXlrKzU1elUwZWhObWdhQ0gxREJXc2JQMWlQMUw1bjVnUXRDV0hjWVZZRkVMcVcrMFFYdEJVd21TMAp0VStnbkhiTy8xWWw0OG9oMG1YcmVPYnozQWJYeG5CRHRXQSt0amdtYy9ReDhhQkkzS1VQd1hUMjVqMnJycGNBCk56OWdwSHU4QmFEdGI1VWJ0K3JhaTZCTEtUVGQrNnZzZnozeG96ekNrSmdTcjYxeXFvd0V1RWFFQmlNY0hIOGUKZ0hNYm1UY216VTJuRmNiWUlmbk91RXVZSVVkQkxmWkhmM00vYUIreFdNdnM0OUVxQlFqNzloSEVkTERaK3BNSApPZkhwMGVoKzlMU0hLeXB6TWVEdUt3UC80U3crK0pmOXE0SWFENEdYaHMwV1cwL1lzRmpXZVE9PQotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0tCg==
    server: https://192.168.31.100:16443
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: system:kube-controller-manager
  name: system:kube-controller-manager@kubernetes
current-context: ""
kind: Config
preferences: {}
users: null

# 2.3 set-credentials: define a user entry and embed the controller-manager certificate into the kubeconfig
kubectl config set-credentials system:kube-controller-manager \
     --client-certificate=/etc/kubernetes/pki/controller-manager.pem \
     --client-key=/etc/kubernetes/pki/controller-manager-key.pem \
     --embed-certs=true \
     --kubeconfig=/etc/kubernetes/controller-manager.kubeconfig
# --client-certificate: the controller-manager certificate (public key)
# --client-key: the controller-manager private key
# --embed-certs: whether to embed the certificates
# --kubeconfig: path of the kubeconfig file

# File contents
[root@k8s-master-01 pki]# cat /etc/kubernetes/controller-manager.kubeconfig
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUQ1RENDQXN5Z0F3SUJBZ0lVUG1DTStLMUtXa0hxMm1SYW1FY2dTYmlTVVNZd0RRWUpLb1pJaHZjTkFRRUwKQlFBd2R6RUxNQWtHQTFVRUJoTUNRMDR4RURBT0JnTlZCQWdUQjBKbGFXcHBibWN4RURBT0JnTlZCQWNUQjBKbAphV3BwYm1jeEV6QVJCZ05WQkFvVENrdDFZbVZ5Ym1WMFpYTXhHakFZQmdOVkJBc1RFVXQxWW1WeWJtVjBaWE10CmJXRnVkV0ZzTVJNd0VRWURWUVFERXdwcmRXSmxjbTVsZEdWek1DQVhEVEl5TURVeE56RXpNVEV3TUZvWUR6SXgKTWpJd05ESXpNVE14TVRBd1dqQjNNUXN3Q1FZRFZRUUdFd0pEVGpFUU1BNEdBMVVFQ0JNSFFtVnBhbWx1WnpFUQpNQTRHQTFVRUJ4TUhRbVZwYW1sdVp6RVRNQkVHQTFVRUNoTUtTM1ZpWlhKdVpYUmxjekVhTUJnR0ExVUVDeE1SClMzVmlaWEp1WlhSbGN5MXRZVzUxWVd3eEV6QVJCZ05WQkFNVENtdDFZbVZ5Ym1WMFpYTXdnZ0VpTUEwR0NTcUcKU0liM0RRRUJBUVVBQTRJQkR3QXdnZ0VLQW9JQkFRRFNwMjl5N2JKZXdlcW5tNmNUOEFDTjJ2bzh2b3BhN0JVego3dkNEVHJLM1N5V0JuSzQ5YnFKRDljWElxMWpzTmZQTCtRU3V3eG1nU3JJbWF1S0Y2RkUvalhXbStWMWhjN0ZZCjlldlFKV09rSHVLVllXUkMzWG50R3M4MSs5bStxRlY0eFllT2xqVlBEQ2FOK3Ntb2RML1lvY3o0QjNDVzNMbFEKcElEaVRxK2xZRTlKZDljcDF4NkF2cVpxVWdrNUJsVzFYeGNDTEU3aGU1ZlpyQUlBakRIa05wM1NXV2R0YjNZZQowWS9HbHZSWldCZkVOaEdRcE9KM3dFVlVXM1hyN1I1K3JEK3huUlZydk4xUlE0bUhnYW1XbzBEMituNUFrQWRCCkpRTFh3QnF3VHREZmRnQ09ra05ZTVpUeWpFemozNUxGUVltenREMVlua2JVaUhybEQxMHhBZ01CQUFHalpqQmsKTUE0R0ExVWREd0VCL3dRRUF3SUJCakFTQmdOVkhSTUJBZjhFQ0RBR0FRSC9BZ0VDTUIwR0ExVWREZ1FXQkJTUgpncmlqdm1BbW1oYXZTcjdtcTBURVdsQTgxakFmQmdOVkhTTUVHREFXZ0JTUmdyaWp2bUFtbWhhdlNyN21xMFRFCldsQTgxakFOQmdrcWhraUc5dzBCQVFzRkFBT0NBUUVBbitUQ2ZkdUVxTTk2QncycEk5aFZmMGpEWkh2bmRHeGwKRENwWFJZOXlrKzU1elUwZWhObWdhQ0gxREJXc2JQMWlQMUw1bjVnUXRDV0hjWVZZRkVMcVcrMFFYdEJVd21TMAp0VStnbkhiTy8xWWw0OG9oMG1YcmVPYnozQWJYeG5CRHRXQSt0amdtYy9ReDhhQkkzS1VQd1hUMjVqMnJycGNBCk56OWdwSHU4QmFEdGI1VWJ0K3JhaTZCTEtUVGQrNnZzZnozeG96ekNrSmdTcjYxeXFvd0V1RWFFQmlNY0hIOGUKZ0hNYm1UY216VTJuRmNiWUlmbk91RXVZSVVkQkxmWkhmM00vYUIreFdNdnM0OUVxQlFqNzloSEVkTERaK3BNSApPZkhwMGVoKzlMU0hLeXB6TWVEdUt3UC80U3crK0pmOXE0SWFENEdYaHMwV1cwL1lzRmpXZVE9PQotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0tCg==
    server: https://192.168.31.100:16443
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: system:kube-controller-manager
  name: system:kube-controller-manager@kubernetes
current-context: ""
kind: Config
preferences: {}
users:
- name: system:kube-controller-manager
  user:
    client-certificate-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUVKakNDQXc2Z0F3SUJBZ0lVZmhZdWx1TnNWS0ZoQWNQNkRiUU5WejdCK29Rd0RRWUpLb1pJaHZjTkFRRUwKQlFBd2R6RUxNQWtHQTFVRUJoTUNRMDR4RURBT0JnTlZCQWdUQjBKbGFXcHBibWN4RURBT0JnTlZCQWNUQjBKbAphV3BwYm1jeEV6QVJCZ05WQkFvVENrdDFZbVZ5Ym1WMFpYTXhHakFZQmdOVkJBc1RFVXQxWW1WeWJtVjBaWE10CmJXRnVkV0ZzTVJNd0VRWURWUVFERXdwcmRXSmxjbTVsZEdWek1DQVhEVEl5TURVeE56RXpOVE13TUZvWUR6SXgKTWpJd05ESXpNVE0xTXpBd1dqQ0JuekVMTUFrR0ExVUVCaE1DUTA0eEVEQU9CZ05WQkFnVEIwSmxhV3BwYm1jeApFREFPQmdOVkJBY1RCMEpsYVdwcGJtY3hKekFsQmdOVkJBb1RIbk41YzNSbGJUcHJkV0psTFdOdmJuUnliMnhzClpYSXRiV0Z1WVdkbGNqRWFNQmdHQTFVRUN4TVJTM1ZpWlhKdVpYUmxjeTF0WVc1MVlXd3hKekFsQmdOVkJBTVQKSG5ONWMzUmxiVHByZFdKbExXTnZiblJ5YjJ4c1pYSXRiV0Z1WVdkbGNqQ0NBU0l3RFFZSktvWklodmNOQVFFQgpCUUFEZ2dFUEFEQ0NBUW9DZ2dFQkFPRml6NjlTWU1CYTZmTUR0eWU5cVdTQ0dmWDd0Z0xYZ1FlTnlTWVp0Y0s2CnM4MWM4djhYWEQ4cW5qMnZudEVOZnRTcnl6VHBGem9oS2psUmdIUkRNTFV1Rkhuc244Tjl4d2ErNXMxMWtkMnQKdmpiYjJKekc3M3pNUGpNNXJxNEFsbGl3ZVZFeU1Hd0pyOWx4UW5YbXBOUmpPUHQ4Ti9FWUhJUmtzbEJuQVVzcQo4WDJmQVVIWkU2MENuZG8wZjRTQnNUR2QwUnFNNVJOY2NaVHJHYTJFUDNLUUZNbUZGbEk5T2hubGY4UmUveCtjCjQ5VTdBNk0rckk1eE1QZUlld2Jzd25vM2FvdDRMQ3dxR00rZnluaUh3ZjU4ZmlZQUw1aC9oU1A2SllqcDNqZFEKWlVVVWhZb1BZaDFManNFUktBWUluMVBqRnZiQXh6Vy80Q1ZWV2JGS1VvMENBd0VBQWFOL01IMHdEZ1lEVlIwUApBUUgvQkFRREFnV2dNQjBHQTFVZEpRUVdNQlFHQ0NzR0FRVUZCd01CQmdnckJnRUZCUWNEQWpBTUJnTlZIUk1CCkFmOEVBakFBTUIwR0ExVWREZ1FXQkJTWXBkazYrcUY1S3dWaWt3SlZJQzMwTjExWUpEQWZCZ05WSFNNRUdEQVcKZ0JTUmdyaWp2bUFtbWhhdlNyN21xMFRFV2xBODFqQU5CZ2txaGtpRzl3MEJBUXNGQUFPQ0FRRUFrekEzNUpHcwphSFp2UlM0ZE03ZVZkVmVvWGEwVEFxUXlURVlnRkFNU3VudjVaUC9FZ2VKT1NJR2VpbmZ3VUM5WDFEZXdlV2l3CmFEQjhObkF0bTVDQW5kKzR6Z0FqRmZlM0RqY1pUZEdZL1V5Q1k1MDB3T2pDR2lMRlFCdVBncWFHa2ZWRFNxQnAKQm1JOVVWS2VpWDZRb3JraTV4akd5YlRQZWh2WmZ4TTRKZllwNlp3aXdnQWlOMTNRZVBpMXZ6blhFZms2TVdmUwpFTjhLNnpOUm9DVDRCTlVaNzk1K2tkQjd5Rm9LaEEvdWQ1a2h1dU02WlVhcHE0R3dCb2FZMWdEUXl1ODhaM1pwCjVmTnVobTJjSW9QSUIzdnYwbktjUjd2ejg4dERWaWtBQjJwcElwZnBRUVZOREh0Z0NXWEQ2aTlMV1BxL2RCb2wKQno0cWcyUzRrS1JrVHc9PQotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0tCg==
    client-key-data: LS0tLS1CRUdJTiBSU0EgUFJJVkFURSBLRVktLS0tLQpNSUlFcFFJQkFBS0NBUUVBNFdMUHIxSmd3RnJwOHdPM0o3MnBaSUlaOWZ1MkF0ZUJCNDNKSmhtMXdycXp6Vnp5Ci94ZGNQeXFlUGErZTBRMSsxS3ZMTk9rWE9pRXFPVkdBZEVNd3RTNFVlZXlmdzMzSEJyN216WFdSM2EyK050dlkKbk1idmZNdytNem11cmdDV1dMQjVVVEl3YkFtdjJYRkNkZWFrMUdNNCszdzM4UmdjaEdTeVVHY0JTeXJ4Zlo4QgpRZGtUclFLZDJqUi9oSUd4TVozUkdvemxFMXh4bE9zWnJZUS9jcEFVeVlVV1VqMDZHZVYveEY3L0g1emoxVHNECm96NnNqbkV3OTRoN0J1ekNlamRxaTNnc0xDb1l6NS9LZUlmQi9ueCtKZ0F2bUgrRkkvb2xpT25lTjFCbFJSU0YKaWc5aUhVdU93UkVvQmdpZlUrTVc5c0RITmIvZ0pWVlpzVXBTalFJREFRQUJBb0lCQVFDUEhCdTlPZlJmRHhzUAppQ05xNDMzRWFPdXRDaGZHOUNsa3IzMnlhSTdGeDZEVlhCaWJLcTBUR3ErQmdacTVLUFdJZWxDOEZ1ajlxd05SCmc5T3BmdVJWbHAyLzBTU2NqNmVwTzl2M1I1aklCa01LT0V2eE9Fdm1sUlZGbDNHMzRIMldjTytIUS9RRkZaMkgKMXVlWlQwc0g1THpReWs0SEV3VkFkMlBWczZIWnBJaUtXajExTUE5QklhV0xOZmxkOXE3dkZSTDJPWmtXcGZPSQpZM2hyWXMranMzVUNwQmM4V0RWTkFlR3YyZVJTUTVKd2VQZzRaZVcyd3VRR1dNa2YwaUk3aXQ1RDN5WExPdmdaClhhOXlDU3hUd0JpaXZvQ3o2K2FmVXZnS3M4VWVPZG1Jc1VxL1BIZlpWZkdKckRreWtjMkdvL0szb0c0MTJPYjQKc28wWTFpRkJBb0dCQVBuL2VxYUFUWnZWVzlSOEZNMGVqTjRmMzBtYUZpeGNHODFTUWk5L1lKTStqNHo0VmxsTgpXKytUdW1QWCs3akx5dUluMDZGd25OUGN6RjFaTzB2UWkvT1Q2UjZqb1VweTYzclg4V1I2VFA2OVAvb1Ivc3BSCnJOZkxiamRiMEo5bG9qME5WeG9icjQ1OE1tTWN6MHBxZzFWK3ozVUFLdjQvakUrcm5ScHBzUHp4QW9HQkFPYk0KRUVuQmNYU0FMeDhTTE5WMVhkbkxlSVVuOVcvYmFjd0JnVGxHd2dkNFROV3B0NlRldmRWbkJyZFhVU1RaQXBpbApNZFUrSGZ4RnlOd1p0RUE0QUN4bm16bzlLR3BNMWp3L2o4Z000UzZzamRrZmtmbXlBcWNUWUh1MWw2NUt2UEVQCitaYkRGY0hLT0JRWnY0bTNDY2drRWhTRzV2Ulpld0JwaEx0ME1WOWRBb0dBV2hyWDFRMG5hOFJCdmRzZkVETXoKcUplcVBmZjRoL0tHM3NFSU0rQmdLWklCNFZoY253RS80cURITEZkYlZlYTE4RDlVaXJwdysvZDMvU2s5TXYwdQpoQk5La1kxK2c3dlozY1BaTUZMWVQzUmNpOEJTcWc4NEVlc3poV1psVWg0cWxJQ3JaVENYWE82c3BvWnF2REtaCnRZWG9OZzVpY0pMcytvWXJNS3JwYkxFQ2dZRUE1SUVwcHp5RkRlbFR2aG1LbGhUTHhMUzFNSEN0aWYvY3NZVFQKNGxkeUIxOU9BMFV6YzJLczVMcEtaZjluY1dvQ0xndHdXVVpVL2M1QjNkajlJNC9PYkNod0FhdEhkbWQ0dk5IWgpreUZkV1k2eUtrUWRqUEIzdTk5dGFVNFRUUmJtRm0zUW1UbXhNdHI1eHJ6dmJIUHlsVnRSSTAybElFdnZnaXIyCnBYbVc0R2tDZ1lFQW4yQTl2ZDd0RTRKSC9kRWc1aVZHZFFKR1pKazQzWWcvdW1HcHM4Mk9laXhxSkh1bVJGb0sKUnNteDZtRE5FeFdHbG1sc3JuaGEzMTlVTjlqQlM3WGFtSEQvMW5reUtxVDArOEplQXJ5MWZlQXkvWmJ0Y3luaApPM29CbUs2TkpxQmJJWjk4MllKNkZ3d2RveTJ1NDRHQ0wvb2pOQjlSYU9Nd0Nwb1krQk5lV0J3PQotLS0tLUVORCBSU0EgUFJJVkFURSBLRVktLS0tLQo=


# 2.4 Set one context as the default
kubectl config use-context system:kube-controller-manager@kubernetes \
     --kubeconfig=/etc/kubernetes/controller-manager.kubeconfig
# Afterwards, the current-context value in controller-manager.kubeconfig is system:kube-controller-manager@kubernetes
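
A quick check that the kubeconfig is wired up as intended:

kubectl config current-context --kubeconfig=/etc/kubernetes/controller-manager.kubeconfig
# expected output: system:kube-controller-manager@kubernetes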

4 scheduler certificate configuration

# 1 Generate the scheduler certificate
cfssl gencert \
   -ca=/etc/kubernetes/pki/ca.pem \
   -ca-key=/etc/kubernetes/pki/ca-key.pem \
   -config=ca-config.json \
   -profile=kubernetes \
   scheduler-csr.json | cfssljson -bare /etc/kubernetes/pki/scheduler

# 2 Generate scheduler.kubeconfig; the process and parameters mirror controller-manager
# 2.1 Set the apiserver address and embed the root certificate
kubectl config set-cluster kubernetes \
     --certificate-authority=/etc/kubernetes/pki/ca.pem \
     --embed-certs=true \
     --server=https://192.168.31.100:16443 \
     --kubeconfig=/etc/kubernetes/scheduler.kubeconfig

# 2.2 Define the user entry and embed the scheduler certificate
kubectl config set-credentials system:kube-scheduler \
     --client-certificate=/etc/kubernetes/pki/scheduler.pem \
     --client-key=/etc/kubernetes/pki/scheduler-key.pem \
     --embed-certs=true \
     --kubeconfig=/etc/kubernetes/scheduler.kubeconfig

# 2.3 Define the context and bind the user
kubectl config set-context system:kube-scheduler@kubernetes \
     --cluster=kubernetes \
     --user=system:kube-scheduler \
     --kubeconfig=/etc/kubernetes/scheduler.kubeconfig

# 2.4 Set the context as the default
kubectl config use-context system:kube-scheduler@kubernetes \
     --kubeconfig=/etc/kubernetes/scheduler.kubeconfig

5 Administrator certificate configuration

This certificate is used by kubectl when administering the cluster.

# 1 Generate the admin certificate
cfssl gencert \
   -ca=/etc/kubernetes/pki/ca.pem \
   -ca-key=/etc/kubernetes/pki/ca-key.pem \
   -config=ca-config.json \
   -profile=kubernetes \
   admin-csr.json | cfssljson -bare /etc/kubernetes/pki/admin
# 2 Generate admin.kubeconfig
# Note: if this is not an HA cluster, change 192.168.31.100:16443 to master01's address and 16443 to the apiserver port (default 6443)
kubectl config set-cluster kubernetes \
    --certificate-authority=/etc/kubernetes/pki/ca.pem \
    --embed-certs=true \
    --server=https://192.168.31.100:16443 \
    --kubeconfig=/etc/kubernetes/admin.kubeconfig
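
Following the same set-credentials / set-context / use-context sequence used for controller-manager and scheduler, the remaining admin steps would be (a sketch; the kubernetes-admin user and context names are assumptions based on that pattern):

kubectl config set-credentials kubernetes-admin \
    --client-certificate=/etc/kubernetes/pki/admin.pem \
    --client-key=/etc/kubernetes/pki/admin-key.pem \
    --embed-certs=true \
    --kubeconfig=/etc/kubernetes/admin.kubeconfig

kubectl config set-context kubernetes-admin@kubernetes \
    --cluster=kubernetes \
    --user=kubernetes-admin \
    --kubeconfig=/etc/kubernetes/admin.kubeconfig

kubectl config use-context kubernetes-admin@kubernetes \
    --kubeconfig=/etc/kubernetes/admin.kubeconfig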
