K8s Cloud Native: Highly Available Cluster Deployment v1.28.2


1. Environment Preparation

K8s cluster role  IP           Hostname        Components installed
master            10.1.16.160  hqiotmaster07l  apiserver, controller-manager, scheduler, kubelet, etcd, docker, kube-proxy, keepalived, nginx, calico
master            10.1.16.161  hqiotmaster08l  apiserver, controller-manager, scheduler, kubelet, etcd, docker, kube-proxy, keepalived, nginx, calico
master            10.1.16.162  hqiotmaster09l  apiserver, controller-manager, scheduler, kubelet, etcd, docker, kube-proxy, keepalived, nginx, calico
worker            10.1.16.163  hqiotnode12l    kubelet, kube-proxy, docker, calico, coredns, ingress-nginx
worker            10.1.16.164  hqiotnode13l    kubelet, kube-proxy, docker, calico, coredns, ingress-nginx
worker            10.1.16.165  hqiotnode14l    kubelet, kube-proxy, docker, calico, coredns, ingress-nginx
vip               10.1.16.202                  nginx, keepalived

1.1 Server environment initialization

# Run on both control-plane and worker nodes
# 1. Set the hostname (use the matching name on each host)
hostnamectl set-hostname master && bash

# 2. Add hosts entries
cat << EOF >  /etc/hosts 
10.1.16.160 hqiotmaster07l
10.1.16.161 hqiotmaster08l
10.1.16.162 hqiotmaster09l
10.1.16.163 hqiotnode12l
10.1.16.164 hqiotnode13l
10.1.16.165 hqiotnode14l
EOF

# 3. Disable the firewall
systemctl stop firewalld
systemctl disable firewalld

# 4. Disable SELinux
sed -i 's/enforcing/disabled/' /etc/selinux/config

# 5. Disable swap
swapoff -a   # temporary; to disable it permanently edit /etc/fstab
vi /etc/fstab
# comment out the swap line, e.g.: /mnt/swap swap swap defaults 0 0
free -m
# check that the Swap row is all zeros
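If you prefer a one-liner instead of editing /etc/fstab by hand, the following sketch comments out swap entries automatically (it assumes the fstab swap lines contain the word "swap"):
swapoff -a
sed -ri 's/^([^#].*\sswap\s.*)$/#\1/' /etc/fstab
free -m   # the Swap row should now read 0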


# 6. Configure time synchronization on every node
yum install chrony -y
systemctl start chronyd && systemctl enable chronyd
chronyc sources
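If the default NTP sources are unreachable from your network, chrony can be pointed at a reachable server; a minimal sketch, assuming ntp.aliyun.com (or an internal NTP server) is usable:
# /etc/chrony.conf - replace the default server/pool lines with a reachable server (assumed address)
server ntp.aliyun.com iburst
systemctl restart chronyd
chronyc sources -v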

 

# 7. Create the /etc/modules-load.d/containerd.conf configuration file:
cat << EOF > /etc/modules-load.d/containerd.conf
overlay
br_netfilter
EOF
Run the following to load the modules immediately:
modprobe overlay
modprobe br_netfilter

 

# 8. Pass bridged IPv4 traffic to the iptables chains
cat << EOF > /etc/sysctl.d/k8s.conf 
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
user.max_user_namespaces=28633
EOF

 

# 9. Prerequisites for enabling IPVS (if you plan to use Istio, do not enable IPVS mode)

Make sure the ipset package is installed on every node. To inspect the IPVS proxy rules it is also worth installing the management tool ipvsadm.

yum install -y ipset ipvsadm


Since IPVS has been merged into the mainline kernel, enabling IPVS for kube-proxy only requires loading the following kernel modules:
ip_vs
ip_vs_rr
ip_vs_wrr
ip_vs_sh
nf_conntrack_ipv4

Run the following on every node:

cat << EOF > /etc/sysconfig/modules/ipvs.modules
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4
EOF

Make the script executable and load the modules:

chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules && lsmod | grep -e ip_vs -e nf_conntrack_ipv4

The /etc/sysconfig/modules/ipvs.modules file created above ensures the required modules are loaded automatically after a reboot.

 

If you get the error "modprobe: FATAL: Module nf_conntrack_ipv4 not found.":

This happens on newer kernels (for example 5.x, while most older guides assume a 3.x kernel): nf_conntrack_ipv4 has been replaced by nf_conntrack, so the correct configuration is as follows.

Run the following on every node:

cat <<EOF> /etc/sysconfig/modules/ipvs.modules
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack
EOF

Make the script executable and load the modules:

chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules && lsmod | grep -e ip_vs -e nf_conntrack

 

# 10. Apply the sysctl settings
sysctl --system 
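To confirm the settings took effect, query the keys directly; each should print "= 1":
sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables net.ipv4.ip_forward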

2. Base Package Installation

yum install -y gcc gcc-c++ make
yum install wget net-tools vim* nc telnet-server telnet curl openssl-devel libnl3-devel net-snmp-devel zlib zlib-devel pcre-devel openssl openssl-devel

# Adjust the shell history size and SSH idle timeout
vi /etc/profile
HISTSIZE=3000
TMOUT=3600

Save and exit, then apply:
source /etc/profile

3. Docker Installation

# Install the yum utility collection
yum install -y yum-utils

# Add the Docker repository
yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo

# Remove any existing Docker installation
yum remove docker docker-client docker-client-latest docker-common docker-latest docker-latest-logrotate docker-logrotate docker-selinux docker-engine-selinux docker-engine
yum list installed | grep docker
yum remove -y docker-ce.x86_64
rm -rf /var/lib/docker
rm -rf /etc/docker/

# List the installable versions
yum list docker-ce --showduplicates | sort -r

# Install the latest version
yum -y install docker-ce
# Or install a specific docker-ce version:
yum -y install docker-ce-23.0.3-1.el7

# Start Docker and enable it at boot
systemctl enable docker && systemctl start docker

# Upload daemon.json to /etc/docker (a sample daemon.json is sketched after the command list below), then reload and restart
systemctl daemon-reload
systemctl restart docker.service
docker info

# Common docker commands:
systemctl stop docker
systemctl start docker
systemctl enable docker
systemctl status docker
systemctl restart docker
docker info
docker --version
containerd --version
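The daemon.json referenced above is not shown in the original; the following is only a minimal sketch, assuming you want the systemd cgroup driver, log rotation, a registry mirror (placeholder URL) and the insecure private Harbor registry 10.1.1.167 used later in this guide:
cat << EOF > /etc/docker/daemon.json
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "registry-mirrors": ["https://registry-mirror.example.com"],
  "insecure-registries": ["10.1.1.167"],
  "log-driver": "json-file",
  "log-opts": { "max-size": "100m", "max-file": "3" }
}
EOF
systemctl daemon-reload && systemctl restart docker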

4. containerd Installation

Download the containerd binary package (you can download it on a machine with internet access and then copy it to the servers):
wget https://github.com/containerd/containerd/releases/download/v1.7.14/cri-containerd-cni-1.7.14-linux-amd64.tar.gz

The archive is laid out according to the directory structure recommended for the official binary deployment. It contains the systemd unit file, containerd itself, and the CNI deployment files.
Extract it into the root of the filesystem:
tar -zvxf cri-containerd-cni-1.7.14-linux-amd64.tar.gz -C /

Note: in testing, the runc bundled in cri-containerd-cni-1.7.14-linux-amd64.tar.gz has dynamic-linking problems on CentOS 7,
so download runc separately from its GitHub releases and replace the runc installed by the containerd package:
wget https://github.com/opencontainers/runc/releases/download/v1.1.10/runc.amd64

install -m 755 runc.amd64 /usr/sbin/runc
runc -v
runc version 1.1.10
commit: v1.1.10-0-g18a0cb0f
spec: 1.0.2-dev
go: go1.20.10
libseccomp: 2.5.4

Next, generate the containerd configuration file:
rm -rf /etc/containerd
mkdir -p /etc/containerd
containerd config default > /etc/containerd/config.toml

According to the Container Runtimes documentation, on distributions that use systemd as the init system, using systemd as the container cgroup driver keeps nodes more stable under resource pressure, so configure containerd's cgroup driver to systemd on every node.
Modify the generated /etc/containerd/config.toml:
sed -i 's/SystemdCgroup = false/SystemdCgroup = true/g' /etc/containerd/config.toml
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
  ...
  [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
    SystemdCgroup = true

# Point the sandbox image at the Aliyun mirror; without this the pause image cannot be pulled
sed -i "s#registry.k8s.io/pause#registry.aliyuncs.com/google_containers/pause#g" /etc/containerd/config.toml
[plugins."io.containerd.grpc.v1.cri"]
  ...
  # sandbox_image = "k8s.gcr.io/pause:3.6"
  sandbox_image = "registry.aliyuncs.com/google_containers/pause:3.9"

# Configure the private Harbor registry
vi /etc/containerd/config.toml
[plugins."io.containerd.grpc.v1.cri".registry.configs]
  [plugins."io.containerd.grpc.v1.cri".registry.configs."10.1.1.167".tls]
    insecure_skip_verify = true
  [plugins."io.containerd.grpc.v1.cri".registry.configs."10.1.1.167".auth]
    username = "admin"
    password = "Harbor12345"
[plugins."io.containerd.grpc.v1.cri".registry.mirrors]
  [plugins."io.containerd.grpc.v1.cri".registry.mirrors."k8s.gcr.io"]
    endpoint = ["https://registry.aliyuncs.com/google_containers"]
  [plugins."io.containerd.grpc.v1.cri".registry.mirrors."10.1.1.167"]
    endpoint = ["https://10.1.1.167"]

# Enable containerd at boot and start it
systemctl daemon-reload
systemctl enable --now containerd && systemctl restart containerd

# Test with crictl and make sure version information is printed with no errors:
crictl version
Version:  0.1.0
RuntimeName:  containerd
RuntimeVersion:  v1.7.14
RuntimeApiVersion:  v1
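crictl reads its runtime endpoint from /etc/crictl.yaml; a minimal sketch pointing it at containerd (the timeout value is an assumption):
cat << EOF > /etc/crictl.yaml
runtime-endpoint: unix:///run/containerd/containerd.sock
image-endpoint: unix:///run/containerd/containerd.sock
timeout: 10
debug: false
EOF
crictl info   # should print runtime status without errors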

5. Installing and Configuring Kubernetes

5.1 Kubernetes high-availability design

To explain the high-availability configuration of the Kubernetes cluster, we use the following design.

In this design, keepalived + nginx provide high availability for the kube-apiserver component.

With the old approach of treating one master node as the primary and simply joining the other two masters to it, the cluster is not truly highly available: once the primary master goes down, the whole cluster becomes unavailable.

5.2 apiserver high availability with keepalived + nginx

Install and configure Nginx on all three master nodes

yum -y install gcc zlib zlib-devel pcre-devel openssl openssl-devel
tar -zvxf nginx-1.27.0.tar.gz
cd nginx-1.27.0

# Full build and install
./configure --prefix=/usr/local/nginx --with-stream --with-http_stub_status_module --with-http_ssl_module
make && make install
ln -s /usr/local/nginx/sbin/nginx /usr/sbin/
nginx -v
cd /usr/local/nginx/sbin/
# Start the service
./nginx
# Stop the service
./nginx -s stop
# Check port 80
netstat -ntulp | grep 80

Create a systemd unit so nginx can be managed as a service:
vi /usr/lib/systemd/system/nginx.service

[Unit]
Description=nginx - high performance web server
After=network.target remote-fs.target nss-lookup.target

[Service]
Type=forking
ExecStart=/usr/local/nginx/sbin/nginx -c /usr/local/nginx/conf/nginx.conf
ExecReload=/usr/local/nginx/sbin/nginx -s reload
ExecStop=/usr/local/nginx/sbin/nginx -s stop

[Install]
WantedBy=multi-user.target

Upload nginx.service to /usr/lib/systemd/system, then:
systemctl daemon-reload
systemctl start nginx.service && systemctl enable nginx.service
systemctl status nginx.service

Edit the nginx configuration file (the apiserver upstream addresses must match the three master nodes of this cluster):

#user  nobody;
worker_processes  1;

#error_log  logs/error.log;
error_log  /var/log/nginx/error.log;
#error_log  logs/error.log  info;

pid        /var/log/nginx/nginx.pid;

events {
    worker_connections  1024;
}

stream {
    log_format main '$remote_addr $upstream_addr - [$time_local] $status $upstream_bytes_sent';
    access_log /var/log/nginx/k8s-access.log main;

    upstream k8s-apiserver {
        server 10.1.16.160:6443 weight=5 max_fails=3 fail_timeout=30s;
        server 10.1.16.161:6443 weight=5 max_fails=3 fail_timeout=30s;
        server 10.1.16.162:6443 weight=5 max_fails=3 fail_timeout=30s;
    }

    server {
        listen 16443;   # nginx shares the master nodes, so this listen port cannot be 6443 or it would conflict with the apiserver
        proxy_pass k8s-apiserver;
    }
}

http {
    include       mime.types;
    default_type  application/octet-stream;

    log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
                      '$status $body_bytes_sent "$http_referer" '
                      '"$http_user_agent" "$http_x_forwarded_for"';

    access_log  /var/log/nginx/access.log  main;

    sendfile            on;
    tcp_nopush          on;
    tcp_nodelay         on;
    keepalive_timeout   65;
    types_hash_max_size 4096;

    #gzip  on;

    server {
        listen       8080;
        server_name  localhost;

        location / {
            root   html;
            index  index.html index.htm;
        }

        error_page  404              /404.html;

        # redirect server error pages to the static page /50x.html
        #error_page   500 502 503 504  /50x.html;
        location = /50x.html {
            root   html;
        }
    }
}

Restart nginx:
systemctl restart nginx.service
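You can also validate the configuration syntax before or after a restart:
/usr/local/nginx/sbin/nginx -t -c /usr/local/nginx/conf/nginx.conf
# expect "syntax is ok" and "test is successful"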
Install and configure Keepalived on all three master nodes

yum install -y curl gcc openssl-devel libnl3-devel net-snmp-devel
yum install -y keepalived
cd /etc/keepalived/
mv keepalived.conf keepalived.conf.bak
vi /etc/keepalived/keepalived.conf

# master node 1 configuration
! Configuration File for keepalived
global_defs {
    router_id NGINX_MASTER
}
vrrp_script chk_apiserver {
    script "/etc/keepalived/check_apiserver.sh"
    interval 5
    weight -5
    fall 2
    rise 1
}
vrrp_instance VI_1 {
    state MASTER
    interface ens192             # NIC name
    mcast_src_ip 10.1.16.160     # server IP
    virtual_router_id 51         # VRRP router ID; must be unique per instance
    priority 100                 # priority
    nopreempt
    advert_int 2                 # VRRP advertisement interval, default 1s
    authentication {
        auth_type PASS
        auth_pass K8SHA_KA_AUTH
    }
    virtual_ipaddress {
        10.1.16.202/24           # virtual IP (VIP)
    }
    track_script {
        chk_apiserver            # health-check script
    }
}

# master node 2 configuration
! Configuration File for keepalived
global_defs {
    router_id LVS_DEVEL
    script_user root
    enable_script_security
}
vrrp_script chk_apiserver {
    script "/etc/keepalived/check_apiserver.sh"
    interval 5
    weight -5
    fall 2
    rise 1
}
vrrp_instance VI_1 {
    state BACKUP
    interface ens192
    mcast_src_ip 10.1.16.161
    virtual_router_id 51
    priority 99
    nopreempt
    advert_int 2
    authentication {
        auth_type PASS
        auth_pass K8SHA_KA_AUTH
    }
    virtual_ipaddress {
        10.1.16.202/24
    }
    track_script {
        chk_apiserver
    }
}

# master node 3 configuration
! Configuration File for keepalived
global_defs {
    router_id LVS_DEVEL
    script_user root
    enable_script_security
}
vrrp_script chk_apiserver {
    script "/etc/keepalived/check_apiserver.sh"
    interval 5
    weight -5
    fall 2
    rise 1
}
vrrp_instance VI_1 {
    state BACKUP
    interface ens192
    mcast_src_ip 10.1.16.162
    virtual_router_id 51
    priority 98
    nopreempt
    advert_int 2
    authentication {
        auth_type PASS
        auth_pass K8SHA_KA_AUTH
    }
    virtual_ipaddress {
        10.1.16.202/24
    }
    track_script {
        chk_apiserver
    }
}

# Health-check script
vi /etc/keepalived/check_apiserver.sh

#!/bin/bash
err=0
for k in $(seq 1 3)
do
    check_code=$(pgrep nginx)   # this deployment load-balances with nginx (the original checked haproxy)
    if [[ $check_code == "" ]]; then
        err=$(expr $err + 1)
        sleep 1
        continue
    else
        err=0
        break
    fi
done
if [[ $err != "0" ]]; then
    echo "systemctl stop keepalived"
    /usr/bin/systemctl stop keepalived
    exit 1
else
    exit 0
fi

Set permissions:
chmod +x /etc/keepalived/check_apiserver.sh
chmod 644 /etc/keepalived/keepalived.conf

Start keepalived:
systemctl daemon-reload
systemctl start keepalived && systemctl enable keepalived
systemctl restart keepalived
systemctl status keepalived

# Check the VIP on the master node
[root@master nginx]# ip addr
2: ens192: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether 00:50:56:9d:e5:7a brd ff:ff:ff:ff:ff:ff
    altname enp11s0
    inet 10.1.16.160/24 brd 10.1.16.255 scope global noprefixroute ens192
       valid_lft forever preferred_lft forever
    inet 10.1.16.202/24 scope global secondary ens192
       valid_lft forever preferred_lft forever
    inet6 fe80::250:56ff:fe9d:e57a/64 scope link noprefixroute
       valid_lft forever preferred_lft forever

Test: stop nginx on the master and the VIP 10.1.16.202 fails over to master2; after restarting nginx and keepalived on the master, the VIP moves back to the master.
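Once the cluster has been initialized (section 5.4), you can also confirm that requests reach the apiserver through the VIP and the nginx stream proxy; any HTTP response (even 401/403 for anonymous access) proves the forwarding path works:
curl -k https://10.1.16.202:16443/version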

5.3 Deploying Kubernetes with kubeadm

# Install kubeadm and kubelet on every node; first create the yum repository:
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=0
gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
yum makecache fast

# If Kubernetes components were installed before, remove them completely first.
# Reset the kubernetes services and networking; delete the network configuration and links.
kubeadm reset
systemctl stop kubelet
systemctl stop docker
rm -rf /var/lib/cni/
rm -rf /var/lib/kubelet/*
rm -rf /etc/cni/
ifconfig cni0 down
ifconfig docker0 down
ip link delete cni0
systemctl start docker
systemctl start kubelet

# Remove the old Kubernetes packages
yum -y remove kubelet kubeadm kubectl
rm -rvf $HOME/.kube
rm -rvf ~/.kube/
rm -rvf /etc/kubernetes/
rm -rvf /etc/systemd/system/kubelet.service.d
rm -rvf /etc/systemd/system/kubelet.service
rm -rvf /usr/bin/kube*
rm -rvf /etc/cni
rm -rvf /opt/cni
rm -rvf /var/lib/etcd
rm -rvf /var/etcd

# List available kubelet/kubeadm/kubectl versions
yum list kubelet kubeadm kubectl --showduplicates | sort -r

# Install the k8s packages on both master and worker nodes
yum install -y kubelet-1.28.2 kubeadm-1.28.2 kubectl-1.28.2
systemctl daemon-reload
systemctl enable kubelet && systemctl start kubelet

Useful kubelet commands:
systemctl enable kubelet
systemctl restart kubelet
systemctl stop kubelet
systemctl start kubelet
systemctl status kubelet
kubelet --version

Note: what each package does
kubeadm: the tool used to bootstrap and initialize the k8s cluster
kubelet: installed on every node; it starts and manages Pods. With a kubeadm-based install, the control-plane and worker components themselves run as Pods, so kubelet is required wherever Pods run.
kubectl: the CLI used to deploy and manage applications, inspect resources, and create, delete and update components

5.4 kubeadm initialization

Running kubeadm config print init-defaults --component-configs KubeletConfiguration prints the default configuration used for cluster initialization. From it you can see that imageRepository controls where the required k8s images are pulled from during initialization.
Based on the defaults, create the kubeadm.yaml configuration file used to initialize this cluster.

# Create kubeadm.yaml
apiVersion: kubeadm.k8s.io/v1beta3
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
# localAPIEndpoint:
#   advertiseAddress: 10.1.16.160
#   bindPort: 6443
nodeRegistration:
  criSocket: unix:///run/containerd/containerd.sock
  imagePullPolicy: IfNotPresent
  taints: null
---
apiServer:
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta3
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controllerManager: {}
dns: {}
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: registry.aliyuncs.com/google_containers
kind: ClusterConfiguration
kubernetesVersion: 1.28.2
controlPlaneEndpoint: 10.1.16.202:16443  # 控制平面使用虚拟IP
networking:
  dnsDomain: cluster.local
  serviceSubnet: 10.96.0.0/12
  podSubnet: 20.244.0.0/16  # Pod CIDR
scheduler: {}
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: ipvs

Here imageRepository is set to the Aliyun registry so that images can be pulled even though gcr is blocked. criSocket selects containerd as the container runtime, the kubelet cgroupDriver is set to systemd, and the kube-proxy mode is set to ipvs.
Before initializing the cluster you can pre-pull the required images on every node with kubeadm config images pull --config kubeadm.yaml.

# Pull the images k8s needs
kubeadm config images pull --config kubeadm.yaml
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-apiserver:v1.28.2
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-controller-manager:v1.28.2
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-scheduler:v1.28.2
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-proxy:v1.28.2
[config/images] Pulled registry.aliyuncs.com/google_containers/pause:3.9
[config/images] Pulled registry.aliyuncs.com/google_containers/etcd:3.5.9-0
[config/images] Pulled registry.aliyuncs.com/google_containers/coredns:v1.10.1

# If an image cannot be downloaded, export it on a machine that can reach the registry and import it offline, e.g.:
ctr -n k8s.io image export kube-proxy.tar registry.aliyuncs.com/google_containers/kube-proxy:v1.28.2
ctr -n k8s.io image import kube-proxy.tar
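To move all of the required images at once instead of one by one, a small loop over kubeadm's image list can export everything; this is a sketch, assuming ctr and the kubeadm.yaml above are available on both machines:
# on the machine with internet access (after kubeadm config images pull)
for img in $(kubeadm config images list --config kubeadm.yaml); do
  name=$(basename ${img%%:*})
  ctr -n k8s.io image export ${name}.tar ${img}
done
# copy the .tar files to every node, then on each node:
for f in *.tar; do ctr -n k8s.io image import $f; done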
# Initialize the cluster with kubeadm
kubeadm init --config kubeadm.yaml

# Initialization output:
[root@HQIOTMASTER10L yum.repos.d]# kubeadm init --config kubeadm.yaml
[init] Using Kubernetes version: v1.28.2
[preflight] Running pre-flight checks
        [WARNING FileExisting-tc]: tc not found in system path
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [hqiotmaster10l kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 10.1.16.169 10.1.16.201]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [hqiotmaster10l localhost] and IPs [10.1.16.160 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [hqiotmaster10l localhost] and IPs [10.1.16.160 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
W0715 16:18:15.468503   67623 endpoint.go:57] [endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[kubeconfig] Writing "admin.conf" kubeconfig file
W0715 16:18:15.544132   67623 endpoint.go:57] [endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[kubeconfig] Writing "kubelet.conf" kubeconfig file
W0715 16:18:15.617290   67623 endpoint.go:57] [endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
W0715 16:18:15.825899   67623 endpoint.go:57] [endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 31.523308 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node hqiotmaster10l as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node hqiotmaster10l as control-plane by adding the taints [node-role.kubernetes.io/control-plane:NoSchedule]
[bootstrap-token] Using token: abcdef.0123456789abcdef
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
W0715 16:18:51.448813   67623 endpoint.go:57] [endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of control-plane nodes by copying certificate authorities
and service account keys on each node and then running the following as root:

  kubeadm join 10.1.16.202:16443 --token abcdef.0123456789abcdef \
        --discovery-token-ca-cert-hash sha256:0cc00fbdbfaa12d6d784b2f20c36619c6121a1dbd715f380fae53f8406ab6e4c \
        --control-plane

Then you can join any number of worker nodes by running the following on each as root:

  kubeadm join 10.1.16.202:16443 --token abcdef.0123456789abcdef \
        --discovery-token-ca-cert-hash sha256:0cc00fbdbfaa12d6d784b2f20c36619c6121a1dbd715f380fae53f8406ab6e4c

The output above records the full initialization log; from it you can see the key steps of manually installing a Kubernetes cluster.
# The key parts are:
• [certs] generates the various certificates
• [kubeconfig] generates the kubeconfig files
• [kubelet-start] generates the kubelet configuration file "/var/lib/kubelet/config.yaml"
• [control-plane] creates the apiserver, controller-manager and scheduler static pods from the yaml files in /etc/kubernetes/manifests
• [bootstrap-token] generates the token; record it, since kubeadm join uses it later when adding nodes to the cluster
• [addons] installs the essential add-ons: CoreDNS and kube-proxy

# Configure kubectl access to the cluster:
rm -rvf $HOME/.kube
mkdir -p $HOME/.kube
cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
chown $(id -u):$(id -g) $HOME/.kube/config

# Check the cluster status and confirm every component is healthy
kubectl get cs
Warning: v1 ComponentStatus is deprecated in v1.19+
NAME                           STATUS             MESSAGE                         ERROR
scheduler                      Healthy            ok                              
controller-manager             Healthy            ok                              
etcd-0                         Healthy            {"health":"true","reason":""}

# Verify kubectl
[root@k8s-master-0 ~]# kubectl get nodes
NAME             STATUS     ROLES           AGE     VERSION
hqiotmaster07l   NotReady   control-plane   2m12s   v1.28.2

5.5 Scaling out the cluster: adding master nodes

# 1. Pull the images on the new master nodes
# Copy kubeadm.yaml to master2 and master3 and pre-pull the required images
kubeadm config images pull --config=kubeadm.yaml

# 2. Copy the certificates from the first master to the other master nodes
mkdir -p /etc/kubernetes/pki/etcd/
scp /etc/kubernetes/pki/ca.* master2:/etc/kubernetes/pki/
scp /etc/kubernetes/pki/ca.* master3:/etc/kubernetes/pki/
scp /etc/kubernetes/pki/sa.* master2:/etc/kubernetes/pki/
scp /etc/kubernetes/pki/sa.* master3:/etc/kubernetes/pki/
scp /etc/kubernetes/pki/front-proxy-ca.* master2:/etc/kubernetes/pki/
scp /etc/kubernetes/pki/front-proxy-ca.* master3:/etc/kubernetes/pki/
scp /etc/kubernetes/pki/etcd/ca.* master2:/etc/kubernetes/pki/etcd/
scp /etc/kubernetes/pki/etcd/ca.* master3:/etc/kubernetes/pki/etcd/

# 3. Generate a join token on the first master
[root@master etcd]# kubeadm token create --print-join-command
kubeadm join 10.1.16.202:16443 --token warf9k.w5m9ami6z4f73v1h --discovery-token-ca-cert-hash sha256:fa99f534d4940bcabff7a155582757af6a27c98360380f01b4ef8413dfa39918

# 4. Join master2 and master3 to the cluster as control-plane nodes
kubeadm join 10.1.16.202:16443 --token warf9k.w5m9ami6z4f73v1h --discovery-token-ca-cert-hash sha256:fa99f534d4940bcabff7a155582757af6a27c98360380f01b4ef8413dfa39918 --control-plane
Success looks like: Run 'kubectl get nodes' to see this node join the cluster.

# 5. On master2/master3, configure kubectl access
mkdir -p $HOME/.kube
cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
chown $(id -u):$(id -g) $HOME/.kube/config

# 6. Check
[root@master k8s]# kubectl get nodes
NAME      STATUS     ROLES           AGE   VERSION
master    NotReady   control-plane   97m   v1.28.2
master2   NotReady   control-plane   85m   v1.28.2
master3   NotReady   control-plane   84m   v1.28.2

5.6 Adding worker nodes to the cluster

# 1. Join node1 to the cluster as a worker node
[root@node1 containerd]# kubeadm join 10.1.16.202:16443 --token warf9k.w5m9ami6z4f73v1h --discovery-token-ca-cert-hash sha256:fa99f534d4940bcabff7a155582757af6a27c98360380f01b4ef8413dfa39918
Success looks like: Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

# Check from any master node
[root@master k8s]# kubectl get nodes
NAME      STATUS     ROLES           AGE    VERSION
master    NotReady   control-plane   109m   v1.28.2
master2   NotReady   control-plane   97m    v1.28.2
master3   NotReady   control-plane   96m    v1.28.2
node1     NotReady   <none>          67s    v1.28.2

# 2. Set the ROLES label on the worker node
[root@master k8s]# kubectl label node node1 node-role.kubernetes.io/worker=worker
node/node1 labeled
[root@master k8s]# kubectl get nodes
NAME      STATUS     ROLES           AGE     VERSION
master    NotReady   control-plane   110m    v1.28.2
master2   NotReady   control-plane   98m     v1.28.2
master3   NotReady   control-plane   97m     v1.28.2
node1     NotReady   worker          2m48s   v1.28.2

6. Installing the Helm 3 Package Manager on master01

# Check the latest release at https://github.com/helm/helm/releases

mkdir -p /usr/local/helm
wget https://get.helm.sh/helm-v3.15.3-linux-amd64.tar.gz
tar -zvxf helm-v3.15.3-linux-amd64.tar.gz
mv linux-amd64/helm /usr/local/bin/
Run helm list and confirm there is no error output.
helm version

7. Installing the Calico Network Plugin

Calico is used as the Pod network component; below it is installed into the cluster with helm.
Download the tigera-operator helm chart (the same version that is used below):
wget https://github.com/projectcalico/calico/releases/download/v3.27.2/tigera-operator-v3.27.2.tgz

# Inspect the configurable values of this chart:
helm show values tigera-operator-v3.27.2.tgz

Create values.yaml as follows:
# The values can be customized, e.g. pulling the calico images from a private registry.
# This is only a local test of the new k8s version, so values.yaml contains just the following lines:
apiServer:
  enabled: false

# Pre-pull the images
crictl pull quay.io/tigera/operator:v1.32.5
crictl pull docker.io/calico/cni:v3.27.2
crictl pull docker.io/calico/csi:v3.27.2
crictl pull docker.io/calico/kube-controllers:v3.27.2
crictl pull docker.io/calico/node-driver-registrar:v3.27.2
crictl pull docker.io/calico/node:v3.27.2
crictl pull docker.io/calico/pod2daemon-flexvol:v3.27.2
crictl pull docker.io/calico/typha:v3.27.2

# If they cannot be downloaded, export them elsewhere and import:
ctr -n k8s.io image import operator.tar
ctr -n k8s.io image import cni.tar
ctr -n k8s.io image import csi.tar
ctr -n k8s.io image import kube-controllers.tar
ctr -n k8s.io image import node-driver-registrar.tar
ctr -n k8s.io image import node.tar
ctr -n k8s.io image import pod2daemon-flexvol.tar
ctr -n k8s.io image import typha.tar
ctr -n k8s.io image import busyboxplus.tar

# Install calico with helm:
helm install calico tigera-operator-v3.27.2.tgz -n kube-system --create-namespace -f values.yaml
NAME: calico
LAST DEPLOYED: Fri Nov 10 09:19:36 2023
NAMESPACE: kube-system
STATUS: deployed
REVISION: 1
TEST SUITE: None

# Check that the calico pods are running; if they never reach the Running state, try rebooting the node.
[root@HQIOTMASTER07L ~]# kubectl get pod -n calico-system
NAME                                       READY   STATUS    RESTARTS       AGE
calico-kube-controllers-6c8966c899-k4b7t   1/1     Running   1 (7d7h ago)   7d7h
calico-node-bksbh                          1/1     Running   1 (7d7h ago)   7d7h
calico-node-kjqsq                          1/1     Running   0              7d7h
calico-node-lwhk9                          1/1     Running   1 (7d7h ago)   7d7h
calico-node-wdmws                          1/1     Running   0              7d7h
calico-node-xqkkq                          1/1     Running   1 (7d7h ago)   7d7h
calico-node-z56lx                          1/1     Running   0              7d7h
calico-typha-78f6f6c7dd-b8hfm              1/1     Running   1 (7d7h ago)   7d7h
calico-typha-78f6f6c7dd-kwjq2              1/1     Running   1 (7d7h ago)   7d7h
calico-typha-78f6f6c7dd-r2cjp              1/1     Running   1 (7d7h ago)   7d7h
csi-node-driver-452cl                      2/2     Running   0              7d7h
csi-node-driver-48bbw                      2/2     Running   2 (7d7h ago)   7d7h
csi-node-driver-52zbp                      2/2     Running   2 (7d7h ago)   7d7h
csi-node-driver-bnmzf                      2/2     Running   2 (7d7h ago)   7d7h
csi-node-driver-w2tfr                      2/2     Running   2 (7d7h ago)   7d7h
csi-node-driver-zw62c                      2/2     Running   2 (7d7h ago)   7d7h

# Verify on master01 that cluster DNS works
First run:
kubectl run curl --image=radial/busyboxplus:curl -it
If you don't see a command prompt, try pressing enter.
[ root@curl:/ ]$
[ root@curl:/ ]$ exit

# Re-enter the same container to keep running commands
kubectl exec -it curl -- /bin/sh
Inside the container, run nslookup kubernetes.default and confirm resolution works:
[root@hqiotmaster07l yum.repos.d]# kubectl exec -it curl -- /bin/sh
/bin/sh: shopt: not found
[ root@curl:/ ]$ nslookup kubernetes.default
Server:    10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local

Name:      kubernetes.default
Address 1: 10.96.0.1 kubernetes.default.svc.cluster.local
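Once DNS resolution is confirmed, the test pod can be removed:
kubectl delete pod curl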

8. Installing the ingress-nginx Reverse Proxy

Add the ingress-nginx helm repository
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm repo update
helm repo list
Search for ingress-nginx
helm search repo ingress-nginx
helm search repo ingress-nginx/ingress-nginx -l

Install ingress-nginx after all worker nodes have joined the cluster:
helm pull ingress-nginx/ingress-nginx --version 4.8.3
tar -zvxf ingress-nginx-4.8.3.tgz
cd ingress-nginx

# Edit values.yaml (only the key fields changed from the chart defaults are shown)
controller:
  image:
    ## Keep false as default for now!
    chroot: false
    # registry: registry.k8s.io
    repository: 10.1.1.167/registry.cn-hangzhou.aliyuncs.com/google_containers/nginx-ingress-controller
    ## for backwards compatibility consider setting the full image url via the repository value below
    ## use *either* current default registry/image or repository format or installing chart by providing the values.yaml will fail
    ## repository:
    tag: "v1.3.0"
    # digest: sha256:d1707ca76d3b044ab8a28277a2466a02100ee9f58a86af1535a3edf9323ea1b5
    # digestChroot: sha256:0fcb91216a22aae43b374fc2e6a03b8afe9e8c78cbf07a09d75636dc4ea3c191
    pullPolicy: IfNotPresent
  dnsPolicy: ClusterFirstWithHostNet
  hostNetwork: true
  kind: DaemonSet
  nodeSelector:
    kubernetes.io/os: linux
    ingress: "true"
  service:
    ipFamilies:
    - IPv4
    ports:
      http: 80
      https: 443
    targetPorts:
      http: http
      https: https
    type: ClusterIP
  admissionWebhooks:
    patch:
      image:
        # registry: registry.k8s.io
        repository: 10.1.1.167/registry.cn-hangzhou.aliyuncs.com/google_containers/kube-webhook-certgen
        ## for backwards compatibility consider setting the full image url via the repository value below
        ## use *either* current default registry/image or repository format or installing chart by providing the values.yaml will fail
        ## repository:
        tag: v1.1.1
        # digest: sha256:64d8c73dca984af206adf9d6d7e46aa550362b1d7a01f3a0a91b20cc67868660
        pullPolicy: IfNotPresent

# Label each node that should run the ingress controller
kubectl label node haiotnode01l ingress=true
kubectl get node -L ingress

# Create the namespace
kubectl create ns ingress-nginx

# Install with helm
helm install ingress-nginx -n ingress-nginx .

# Output:
NAME: ingress-nginx
LAST DEPLOYED: Thu Nov  9 17:30:42 2023
NAMESPACE: ingress-nginx
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
The ingress-nginx controller has been installed.
It may take a few minutes for the LoadBalancer IP to be available.
You can watch the status by running 'kubectl --namespace ingress-nginx get services -o wide -w ingress-nginx-controller'

An example Ingress that makes use of the controller:
  apiVersion: networking.k8s.io/v1
  kind: Ingress
  metadata:
    name: example
    namespace: foo
  spec:
    ingressClassName: nginx
    rules:
      - host: www.example.com
        http:
          paths:
            - pathType: Prefix
              backend:
                service:
                  name: exampleService
                  port:
                    number: 80
              path: /
    # This section is only required if TLS is to be enabled for the Ingress
    tls:
      - hosts:
        - www.example.com
        secretName: example-tls

If TLS is enabled for the Ingress, a Secret containing the certificate and key must also be provided:
  apiVersion: v1
  kind: Secret
  metadata:
    name: example-tls
    namespace: foo
  data:
    tls.crt: <base64 encoded cert>
    tls.key: <base64 encoded key>
  type: kubernetes.io/tls

To uninstall ingress-nginx:
helm delete ingress-nginx -n ingress-nginx

To inspect ingress-nginx resources:
kubectl get all -n ingress-nginx

[root@HQIOTMASTER07L ~]# kubectl get pod -n ingress-nginx
NAME                             READY   STATUS    RESTARTS   AGE
ingress-nginx-controller-fljbs   1/1     Running   0          7d6h
ingress-nginx-controller-lhn9m   1/1     Running   0          7d6h
ingress-nginx-controller-w76v2   1/1     Running   0          7d6h

9. Configuring etcd for High Availability

# Edit the etcd.yaml manifest on master, master2 and master3
vi /etc/kubernetes/manifests/etcd.yaml

Change
- --initial-cluster=hqiotmaster10l=https://10.1.16.160:2380
to
- --initial-cluster=hqiotmaster10l=https://10.1.16.160:2380,hqiotmaster11l=https://10.1.16.161:2380,hqiotmaster12l=https://10.1.16.162:2380

9.1 Verify the etcd cluster configuration

# etcdctl download: https://github.com/etcd-io/etcd/releases
cd etcd-v3.5.9-linux-amd64
cp etcd* /usr/local/bin

[root@HQIOTMASTER07L ~]# etcdctl --cert /etc/kubernetes/pki/etcd/peer.crt --key /etc/kubernetes/pki/etcd/peer.key --cacert /etc/kubernetes/pki/etcd/ca.crt member list
42cd16c4205e7bee, started, hqiotmaster07l, https://10.1.16.160:2380, https://10.1.16.160:2379, false
bb9be9499c3a8464, started, hqiotmaster09l, https://10.1.16.162:2380, https://10.1.16.162:2379, false
c8761c7050ca479a, started, hqiotmaster08l, https://10.1.16.161:2380, https://10.1.16.161:2379, false

[root@HQIOTMASTER07L ~]# etcdctl -w table --cert /etc/kubernetes/pki/etcd/peer.crt --key /etc/kubernetes/pki/etcd/peer.key --cacert /etc/kubernetes/pki/etcd/ca.crt --endpoints=https://10.1.16.160:2379,https://10.1.16.161:2379,https://10.1.16.162:2379 endpoint status --cluster
+--------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
|         ENDPOINT         |        ID        | VERSION | DB SIZE | IS LEADER | IS LEARNER | RAFT TERM | RAFT INDEX | RAFT APPLIED INDEX | ERRORS |
+--------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
| https://10.1.16.160:2379 | 42cd16c4205e7bee |   3.5.9 |   15 MB |     false |      false |        11 |    2905632 |            2905632 |        |
| https://10.1.16.162:2379 | bb9be9499c3a8464 |   3.5.9 |   15 MB |     false |      false |        11 |    2905632 |            2905632 |        |
| https://10.1.16.161:2379 | c8761c7050ca479a |   3.5.9 |   16 MB |      true |      false |        11 |    2905632 |            2905632 |        |
+--------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+

10. Simulating a Control-Plane Node Failure and Recovering Quickly

Scenario: a cluster has 3 control-plane nodes and 3 worker nodes. One control-plane node (master) fails and is shut down, and it cannot be repaired in place, so we run kubectl delete nodes master to remove it. Later the machine is repaired and racked again, and we want to add it back to the cluster as a control-plane node. How do we do that?
Reference: https://www.cnblogs.com/yangmeichong/p/16464574.html

# The commands are the same regardless of version
[root@master ~]# etcdctl --cert /etc/kubernetes/pki/etcd/peer.crt --key /etc/kubernetes/pki/etcd/peer.key --cacert /etc/kubernetes/pki/etcd/ca.crt member list
[root@master ~]# ETCDCTL_API=3 etcdctl --endpoints 127.0.0.1:2379 --cacert /etc/kubernetes/pki/etcd/ca.crt --cert /etc/kubernetes/pki/etcd/server.crt --key /etc/kubernetes/pki/etcd/server.key member remove a2f7e7fa1563203c

10.1 Remove the failed etcd member

cd /root/etcd-v3.4.13-linux-amd64
cp etcd* /usr/local/bin

# List the etcd members
[root@master etcd-v3.4.13-linux-amd64]# ETCDCTL_API=3 etcdctl --endpoints 127.0.0.1:2379 --cacert /etc/kubernetes/pki/etcd/ca.crt --cert  /etc/kubernetes/pki/etcd/server.crt --key  /etc/kubernetes/pki/etcd/server.key member list
9754d4208fa9e54b, started, master, https://192.168.1.10:2380, https://192.168.1.10:2379, false
b3688cea7fb0bfd6, started, pod1, https://192.168.1.11:2380, https://192.168.1.11:2379, false

# Find the member ID for pod1 and remove it
[root@master etcd-v3.4.13-linux-amd64]# ETCDCTL_API=3 etcdctl --endpoints 127.0.0.1:2379 --cacert /etc/kubernetes/pki/etcd/ca.crt --cert /etc/kubernetes/pki/etcd/server.crt --key /etc/kubernetes/pki/etcd/server.key member remove b3688cea7fb0bfd6
Member b3688cea7fb0bfd6 removed from cluster cbd4e4d0a63d294d

# Verify
[root@master etcd-v3.4.13-linux-amd64]# ETCDCTL_API=3 etcdctl --endpoints 127.0.0.1:2379 --cacert /etc/kubernetes/pki/etcd/ca.crt --cert  /etc/kubernetes/pki/etcd/server.crt --key  /etc/kubernetes/pki/etcd/server.key member list
9754d4208fa9e54b, started, master, https://192.168.1.10:2380, https://192.168.1.10:2379, false

10.2 Rejoin the node to the cluster

# 1. Generate the join command on the master
[root@master etcd-v3.4.13-linux-amd64]# kubeadm token create --print-join-command
kubeadm join 192.168.1.20:16443 --token 2q0q3r.kmd36rm0vuuc1kcv     --discovery-token-ca-cert-hash sha256:6e220a97f3d79d0b53b5ac18979dcfacdfb5da5ce0629017b745a8a4df162d27

# 2. On the master, delete the old node object:
[root@master etcd-v3.4.13-linux-amd64]# kubectl delete nodes pod1
node "pod1" deleted# 3.pod1上执行,被删除etcd的节点上执行
[root@pod1 ~]# kubeadm reset# 4.将master上kubernetes证书传到pod1
[root@master pki]# scp ca.crt ca.key sa.key sa.pub front-proxy-ca.crt front-proxy-ca.key pod1:/etc/kubernetes/pki/
ca.crt                                                                                                                                                                                                                                                             100% 1066   498.4KB/s   00:00    
ca.key                                                                                                                                                                                                                                                             100% 1679     1.5MB/s   00:00    
sa.key                                                                                                                                                                                                                                                             100% 1675     1.6MB/s   00:00    
sa.pub                                                                                                                                                                                                                                                             100%  451   553.5KB/s   00:00    
front-proxy-ca.crt                                                                                                                                                                                                                                                 100% 1078     1.1MB/s   00:00    
front-proxy-ca.key

[root@pod1 ~]# cd /etc/kubernetes/pki/
[root@master etcd]# scp ca.crt ca.key pod1:/etc/kubernetes/pki/etcd/
ca.crt                100% 1058   921.3KB/s   00:00
ca.key

# On pod1, run the following to join it to the cluster as a control-plane node:
[root@pod1 pki]# kubeadm join 192.168.1.20:16443 --token 2q0q3r.kmd36rm0vuuc1kcv --discovery-token-ca-cert-hash sha256:6e220a97f3d79d0b53b5ac18979dcfacdfb5da5ce0629017b745a8a4df162d27 --control-plane

[root@master etcd]# kubectl get nodes
NAME     STATUS   ROLES                  AGE     VERSION
master   Ready    control-plane,master   4d2h    v1.20.7
pod1     Ready    control-plane,master   54s     v1.20.7
pod2     Ready    <none>                 3d14h   v1.20.7

11. Extending Certificate Validity

11.1 Check certificate expiry dates

[root@HQIOTMASTER07L ~]# for item in `find /etc/kubernetes/pki -maxdepth 2 -name "*.crt"`;do openssl x509 -in $item -text -noout| grep Not;echo ======================$item===================;done
            Not Before: Jul 25 07:23:27 2024 GMT
            Not After : Jul 23 07:28:27 2034 GMT
======================/etc/kubernetes/pki/ca.crt===================
            Not Before: Jul 30 03:24:26 2024 GMT
            Not After : Jul 28 03:24:26 2034 GMT
======================/etc/kubernetes/pki/apiserver.crt===================
            Not Before: Jul 30 03:24:26 2024 GMT
            Not After : Jul 28 03:24:26 2034 GMT
======================/etc/kubernetes/pki/apiserver-kubelet-client.crt===================
            Not Before: Jul 25 07:23:27 2024 GMT
            Not After : Jul 23 07:28:27 2034 GMT
======================/etc/kubernetes/pki/front-proxy-ca.crt===================
            Not Before: Jul 30 03:24:26 2024 GMT
            Not After : Jul 28 03:24:26 2034 GMT
======================/etc/kubernetes/pki/front-proxy-client.crt===================
            Not Before: Jul 25 07:23:28 2024 GMT
            Not After : Jul 23 07:28:28 2034 GMT
======================/etc/kubernetes/pki/etcd/ca.crt===================
            Not Before: Jul 30 03:24:26 2024 GMT
            Not After : Jul 28 03:24:26 2034 GMT
======================/etc/kubernetes/pki/etcd/server.crt===================
            Not Before: Jul 30 03:24:26 2024 GMT
            Not After : Jul 28 03:24:26 2034 GMT
======================/etc/kubernetes/pki/etcd/peer.crt===================
            Not Before: Jul 30 03:24:26 2024 GMT
            Not After : Jul 28 03:24:26 2034 GMT
======================/etc/kubernetes/pki/etcd/healthcheck-client.crt===================
            Not Before: Jul 30 03:24:26 2024 GMT
            Not After : Jul 28 03:24:26 2034 GMT
======================/etc/kubernetes/pki/apiserver-etcd-client.crt===================

11.2 Certificate renewal script

# Script reproduced from https://github.com/yuyicai/update-kube-cert
#!/usr/bin/env bash
set -o errexit
set -o pipefail
# set -o xtrace

# set output color
NC='\033[0m'
RED='\033[31m'
GREEN='\033[32m'
YELLOW='\033[33m'
BLUE='\033[34m'
# set default cri
CRI="docker"log::err() {printf "[$(date +'%Y-%m-%dT%H:%M:%S.%2N%z')][${RED}ERROR${NC}] %b\n" "$@"
}log::info() {printf "[$(date +'%Y-%m-%dT%H:%M:%S.%2N%z')][INFO] %b\n" "$@"
}log::warning() {printf "[$(date +'%Y-%m-%dT%H:%M:%S.%2N%z')][${YELLOW}WARNING${NC}] \033[0m%b\n" "$@"
}check_file() {if [[ ! -r ${1} ]]; thenlog::err "can not find ${1}"exit 1fi
}# get x509v3 subject alternative name from the old certificate
cert::get_subject_alt_name() {local cert=${1}.crtlocal alt_namecheck_file "${cert}"alt_name=$(openssl x509 -text -noout -in "${cert}" | grep -A1 'Alternative' | tail -n1 | sed 's/[[:space:]]*Address//g')printf "%s\n" "${alt_name}"
}# get subject from the old certificate
cert::get_subj() {local cert=${1}.crtlocal subjcheck_file "${cert}"subj=$(openssl x509 -text -noout -in "${cert}" | grep "Subject:" | sed 's/Subject:/\//g;s/\,/\//;s/[[:space:]]//g')printf "%s\n" "${subj}"
}cert::backup_file() {local file=${1}if [[ ! -e ${file}.old-$(date +%Y%m%d) ]]; thencp -rp "${file}" "${file}.old-$(date +%Y%m%d)"log::info "backup ${file} to ${file}.old-$(date +%Y%m%d)"elselog::warning "does not backup, ${file}.old-$(date +%Y%m%d) already exists"fi
}# check certificate expiration
cert::check_cert_expiration() {local cert=${1}.crtlocal cert_expirescert_expires=$(openssl x509 -text -noout -in "${cert}" | awk -F ": " '/Not After/{print$2}')printf "%s\n" "${cert_expires}"
}# check kubeconfig expiration
cert::check_kubeconfig_expiration() {local config=${1}.conflocal certlocal cert_expirescert=$(grep "client-certificate-data" "${config}" | awk '{print$2}' | base64 -d)cert_expires=$(openssl x509 -text -noout -in <(printf "%s" "${cert}") | awk -F ": " '/Not After/{print$2}')printf "%s\n" "${cert_expires}"
}# check etcd certificates expiration
cert::check_etcd_certs_expiration() {local certlocal certscerts=("${ETCD_CERT_CA}""${ETCD_CERT_SERVER}""${ETCD_CERT_PEER}""${ETCD_CERT_HEALTHCHECK_CLIENT}""${ETCD_CERT_APISERVER_ETCD_CLIENT}")for cert in "${certs[@]}"; doif [[ ! -r ${cert} ]]; thenprintf "%-50s%-30s\n" "${cert}.crt" "$(cert::check_cert_expiration "${cert}")"fidone
}# check master certificates expiration
cert::check_master_certs_expiration() {local certslocal kubeconfslocal certlocal confcerts=("${CERT_CA}""${CERT_APISERVER}""${CERT_APISERVER_KUBELET_CLIENT}""${FRONT_PROXY_CA}""${FRONT_PROXY_CLIENT}")# add support for super_admin.conf, which was added after k8s v1.30.if [ -f "${CONF_SUPER_ADMIN}.conf" ]; thenkubeconfs=("${CONF_CONTROLLER_MANAGER}""${CONF_SCHEDULER}""${CONF_ADMIN}""${CONF_SUPER_ADMIN}")else kubeconfs=("${CONF_CONTROLLER_MANAGER}""${CONF_SCHEDULER}""${CONF_ADMIN}")fiprintf "%-50s%-30s\n" "CERTIFICATE" "EXPIRES"for conf in "${kubeconfs[@]}"; doif [[ ! -r ${conf} ]]; thenprintf "%-50s%-30s\n" "${conf}.config" "$(cert::check_kubeconfig_expiration "${conf}")"fidonefor cert in "${certs[@]}"; doif [[ ! -r ${cert} ]]; thenprintf "%-50s%-30s\n" "${cert}.crt" "$(cert::check_cert_expiration "${cert}")"fidone
}# check all certificates expiration
cert::check_all_expiration() {cert::check_master_certs_expirationcert::check_etcd_certs_expiration
}# generate certificate whit client, server or peer
# Args:
#   $1 (the name of certificate)
#   $2 (the type of certificate, must be one of client, server, peer)
#   $3 (the subject of certificates)
#   $4 (the validity of certificates) (days)
#   $5 (the name of ca)
#   $6 (the x509v3 subject alternative name of certificate when the type of certificate is server or peer)
cert::gen_cert() {local cert_name=${1}local cert_type=${2}local subj=${3}local cert_days=${4}local ca_name=${5}local alt_name=${6}local ca_cert=${ca_name}.crtlocal ca_key=${ca_name}.keylocal cert=${cert_name}.crtlocal key=${cert_name}.keylocal csr=${cert_name}.csrlocal common_csr_conf='distinguished_name = dn\n[dn]\n[v3_ext]\nkeyUsage = critical, digitalSignature, keyEncipherment\n'for file in "${ca_cert}" "${ca_key}" "${cert}" "${key}"; docheck_file "${file}"donecase "${cert_type}" inclient)csr_conf=$(printf "%bextendedKeyUsage = clientAuth\n" "${common_csr_conf}");;server)csr_conf=$(printf "%bextendedKeyUsage = serverAuth\nsubjectAltName = %b\n" "${common_csr_conf}" "${alt_name}");;peer)csr_conf=$(printf "%bextendedKeyUsage = serverAuth, clientAuth\nsubjectAltName = %b\n" "${common_csr_conf}" "${alt_name}");;*)log::err "unknow, unsupported certs type: ${YELLOW}${cert_type}${NC}, supported type: client, server, peer"exit 1;;esac# gen csropenssl req -new -key "${key}" -subj "${subj}" -reqexts v3_ext \-config <(printf "%b" "${csr_conf}") \-out "${csr}" >/dev/null 2>&1# gen certopenssl x509 -in "${csr}" -req -CA "${ca_cert}" -CAkey "${ca_key}" -CAcreateserial -extensions v3_ext \-extfile <(printf "%b" "${csr_conf}") \-days "${cert_days}" -out "${cert}" >/dev/null 2>&1rm -f "${csr}"
}cert::update_kubeconf() {local cert_name=${1}local kubeconf_file=${cert_name}.conflocal cert=${cert_name}.crtlocal key=${cert_name}.keylocal subjlocal cert_base64check_file "${kubeconf_file}"# get the key from the old kubeconfgrep "client-key-data" "${kubeconf_file}" | awk '{print$2}' | base64 -d >"${key}"# get the old certificate from the old kubeconfgrep "client-certificate-data" "${kubeconf_file}" | awk '{print$2}' | base64 -d >"${cert}"# get subject from the old certificatesubj=$(cert::get_subj "${cert_name}")cert::gen_cert "${cert_name}" "client" "${subj}" "${CERT_DAYS}" "${CERT_CA}"# get certificate base64 codecert_base64=$(base64 -w 0 "${cert}")# set certificate base64 code to kubeconfsed -i 's/client-certificate-data:.*/client-certificate-data: '"${cert_base64}"'/g' "${kubeconf_file}"rm -f "${cert}"rm -f "${key}"
}cert::update_etcd_cert() {local subjlocal subject_alt_namelocal cert# generate etcd server,peer certificate# /etc/kubernetes/pki/etcd/server# /etc/kubernetes/pki/etcd/peerfor cert in ${ETCD_CERT_SERVER} ${ETCD_CERT_PEER}; dosubj=$(cert::get_subj "${cert}")subject_alt_name=$(cert::get_subject_alt_name "${cert}")cert::gen_cert "${cert}" "peer" "${subj}" "${CERT_DAYS}" "${ETCD_CERT_CA}" "${subject_alt_name}"log::info "${GREEN}updated ${BLUE}${cert}.conf${NC}"done# generate etcd healthcheck-client,apiserver-etcd-client certificate# /etc/kubernetes/pki/etcd/healthcheck-client# /etc/kubernetes/pki/apiserver-etcd-clientfor cert in ${ETCD_CERT_HEALTHCHECK_CLIENT} ${ETCD_CERT_APISERVER_ETCD_CLIENT}; dosubj=$(cert::get_subj "${cert}")cert::gen_cert "${cert}" "client" "${subj}" "${CERT_DAYS}" "${ETCD_CERT_CA}"log::info "${GREEN}updated ${BLUE}${cert}.conf${NC}"done# restart etcdcase $CRI in"docker")docker ps | awk '/k8s_etcd/{print$1}' | xargs -r -I '{}' docker restart {} >/dev/null 2>&1 || true;;"containerd")crictl ps | awk '/etcd-/{print$(NF-1)}' | xargs -r -I '{}' crictl stopp {} >/dev/null 2>&1 || true;;esaclog::info "restarted etcd with ${CRI}"
}cert::update_master_cert() {local subjlocal subject_alt_namelocal conf# generate apiserver server certificate# /etc/kubernetes/pki/apiserversubj=$(cert::get_subj "${CERT_APISERVER}")subject_alt_name=$(cert::get_subject_alt_name "${CERT_APISERVER}")cert::gen_cert "${CERT_APISERVER}" "server" "${subj}" "${CERT_DAYS}" "${CERT_CA}" "${subject_alt_name}"log::info "${GREEN}updated ${BLUE}${CERT_APISERVER}.crt${NC}"# generate apiserver-kubelet-client certificate# /etc/kubernetes/pki/apiserver-kubelet-clientsubj=$(cert::get_subj "${CERT_APISERVER_KUBELET_CLIENT}")cert::gen_cert "${CERT_APISERVER_KUBELET_CLIENT}" "client" "${subj}" "${CERT_DAYS}" "${CERT_CA}"log::info "${GREEN}updated ${BLUE}${CERT_APISERVER_KUBELET_CLIENT}.crt${NC}"# generate kubeconf for controller-manager,scheduler and kubelet# /etc/kubernetes/controller-manager,scheduler,admin,kubelet.conf,super_admin(added after k8s v1.30.)if [ -f "${CONF_SUPER_ADMIN}.conf" ]; thenconf_list="${CONF_CONTROLLER_MANAGER} ${CONF_SCHEDULER} ${CONF_ADMIN} ${CONF_KUBELET} ${CONF_SUPER_ADMIN}"else conf_list="${CONF_CONTROLLER_MANAGER} ${CONF_SCHEDULER} ${CONF_ADMIN} ${CONF_KUBELET}"fifor conf in ${conf_list}; doif [[ ${conf##*/} == "kubelet" ]]; then# https://github.com/kubernetes/kubeadm/issues/1753set +egrep kubelet-client-current.pem /etc/kubernetes/kubelet.conf >/dev/null 2>&1kubelet_cert_auto_update=$?set -eif [[ "$kubelet_cert_auto_update" == "0" ]]; thenlog::info "does not need to update kubelet.conf"continuefifi# update kubeconfcert::update_kubeconf "${conf}"log::info "${GREEN}updated ${BLUE}${conf}.conf${NC}"# copy admin.conf to ${HOME}/.kube/configif [[ ${conf##*/} == "admin" ]]; thenmkdir -p "${HOME}/.kube"local config=${HOME}/.kube/configlocal config_backupconfig_backup=${HOME}/.kube/config.old-$(date +%Y%m%d)if [[ -f ${config} ]] && [[ ! -f ${config_backup} ]]; thencp -fp "${config}" "${config_backup}"log::info "backup ${config} to ${config_backup}"ficp -fp "${conf}.conf" "${HOME}/.kube/config"log::info "copy the admin.conf to ${HOME}/.kube/config"fidone# generate front-proxy-client certificate# /etc/kubernetes/pki/front-proxy-clientsubj=$(cert::get_subj "${FRONT_PROXY_CLIENT}")cert::gen_cert "${FRONT_PROXY_CLIENT}" "client" "${subj}" "${CERT_DAYS}" "${FRONT_PROXY_CA}"log::info "${GREEN}updated ${BLUE}${FRONT_PROXY_CLIENT}.crt${NC}"# restart apiserver, controller-manager, scheduler and kubeletfor item in "apiserver" "controller-manager" "scheduler"; docase $CRI in"docker")docker ps | awk '/k8s_kube-'${item}'/{print$1}' | xargs -r -I '{}' docker restart {} >/dev/null 2>&1 || true;;"containerd")crictl ps | awk '/kube-'${item}'-/{print $(NF-1)}' | xargs -r -I '{}' crictl stopp {} >/dev/null 2>&1 || true;;esaclog::info "restarted ${item} with ${CRI}"donesystemctl restart kubelet || truelog::info "restarted kubelet"
}main() {local node_type=$1# read the optionsARGS=`getopt -o c: --long cri: -- "$@"`eval set -- "$ARGS"# extract options and their arguments into variables.while truedocase "$1" in-c|--cri)case "$2" in"docker"|"containerd")CRI=$2shift 2;;*)echo 'Unsupported cri. Valid options are "docker", "containerd".'exit 1;;esac;;--)shiftbreak;;*)echo "Invalid arguments."exit 1;;esacdoneCERT_DAYS=3650KUBE_PATH=/etc/kubernetesPKI_PATH=${KUBE_PATH}/pki# master certificates path# apiserverCERT_CA=${PKI_PATH}/caCERT_APISERVER=${PKI_PATH}/apiserverCERT_APISERVER_KUBELET_CLIENT=${PKI_PATH}/apiserver-kubelet-clientCONF_CONTROLLER_MANAGER=${KUBE_PATH}/controller-managerCONF_SCHEDULER=${KUBE_PATH}/schedulerCONF_ADMIN=${KUBE_PATH}/adminCONF_SUPER_ADMIN=${KUBE_PATH}/super-adminCONF_KUBELET=${KUBE_PATH}/kubelet# front-proxyFRONT_PROXY_CA=${PKI_PATH}/front-proxy-caFRONT_PROXY_CLIENT=${PKI_PATH}/front-proxy-client# etcd certificates pathETCD_CERT_CA=${PKI_PATH}/etcd/caETCD_CERT_SERVER=${PKI_PATH}/etcd/serverETCD_CERT_PEER=${PKI_PATH}/etcd/peerETCD_CERT_HEALTHCHECK_CLIENT=${PKI_PATH}/etcd/healthcheck-clientETCD_CERT_APISERVER_ETCD_CLIENT=${PKI_PATH}/apiserver-etcd-clientcase ${node_type} in# etcd)# # update etcd certificates#   cert::update_etcd_cert# ;;master)# check certificates expirationcert::check_master_certs_expiration# backup $KUBE_PATH to $KUBE_PATH.old-$(date +%Y%m%d)cert::backup_file "${KUBE_PATH}"# update master certificates and kubeconflog::info "${GREEN}updating...${NC}"cert::update_master_certlog::info "${GREEN}done!!!${NC}"# check certificates expiration after certificates updatedcert::check_master_certs_expiration;;all)# check certificates expirationcert::check_all_expiration# backup $KUBE_PATH to $KUBE_PATH.old-$(date +%Y%m%d)cert::backup_file "${KUBE_PATH}"# update etcd certificateslog::info "${GREEN}updating...${NC}"cert::update_etcd_cert# update master certificates and kubeconfcert::update_master_certlog::info "${GREEN}done!!!${NC}"# check certificates expiration after certificates updatedcert::check_all_expiration;;check)# check certificates expirationcert::check_all_expiration;;*)log::err "unknown, unsupported cert type: ${node_type}, supported type: \"all\", \"master\""printf "Documentation: https://github.com/yuyicai/update-kube-cert
  example:'\033[32m./update-kubeadm-cert.sh all\033[0m' update all etcd certificates, master certificates and kubeconf/etc/kubernetes├── admin.conf├── super-admin.conf├── controller-manager.conf├── scheduler.conf├── kubelet.conf└── pki├── apiserver.crt├── apiserver-etcd-client.crt├── apiserver-kubelet-client.crt├── front-proxy-client.crt└── etcd├── healthcheck-client.crt├── peer.crt└── server.crt'\033[32m./update-kubeadm-cert.sh master\033[0m' update only master certificates and kubeconf/etc/kubernetes├── admin.conf├── super-admin.conf├── controller-manager.conf├── scheduler.conf├── kubelet.conf└── pki├── apiserver.crt├── apiserver-kubelet-client.crt└── front-proxy-client.crt
"
    exit 1;;esac
}main "$@"

11.3 Run the script

Run the following on every master node.

cd /etc/yum.repos.d/
Upload the file update-kubeadm-cert.sh, then make it executable and renew the certificates:
chmod 755 update-kubeadm-cert.sh
./update-kubeadm-cert.sh all --cri containerd
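After the script finishes, the new expiry dates can be cross-checked with kubeadm itself:
kubeadm certs check-expiration
# the certificate table should now show the extended expiration dates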
