Deploying Kubernetes v1.26.3 with the containerd Container Runtime and kubeadm

 Overview of the core components on Kubernetes master and worker nodes

  Master node components

  The master node is the Kubernetes control-plane node. Three components are essential on it: the apiserver, the scheduler, and the controller-manager. etcd is the cluster datastore and an absolutely critical component, usually deployed as a separate cluster; it stores the configuration and state information of every component in the cluster.

  apiserver: the apiserver accepts requests, authenticates clients, and performs admission control on submitted resource requests. It is the single entry point for managing the cluster and the only component that reads and writes data in etcd. The other components keep TCP sessions to the apiserver and watch the resources they care about; when a relevant resource changes, the component fetches the new configuration through the apiserver, acts on it, and writes the resulting state back to etcd, again through the apiserver. Authentication and admission control work as follows: a user sends a management request to the apiserver over HTTPS; the apiserver first verifies the user's identity and legitimacy, typically using the certificate the client presents, and treats the user as legitimate if those credentials can be matched against the cluster's configured authentication data. The apiserver then applies admission control to the submitted resources, checking that the request conforms to the syntax and schema of the corresponding API; requests that do not are rejected. Only requests that pass both authentication and admission are written to or read from etcd, after which the other components discover the related events through the watch mechanism and carry out the requested work. In short, the apiserver accepts requests (from administrators and from the components that connect to it), authenticates them, and applies admission control; no other component reads or writes etcd directly — all access is proxied through the apiserver.

  scheduler: the scheduler picks the most suitable node from the list of available nodes for each pod in the pending-pod list and writes the binding into etcd. The kubelet on the chosen node learns of the binding by watching the apiserver, retrieves the pod manifest, pulls the image, and starts the container. Scheduling happens in two phases. The first is the predicate (filtering) phase: nodes are filtered out if, for example, the volumes the pod needs conflict with volumes already on the node, the node lacks the resources the pod requests, the node does not carry the labels required by the pod's label selector, or the pod's node affinity, taints, and tolerations do not match. Nodes that survive filtering enter the second, priority (scoring) phase, in which the best node is chosen from the remaining candidates — for example, prefer the node with the lowest resource consumption (CPU and memory are scored; less consumption means a higher score, and the pod is scheduled to the highest-scoring node), prefer nodes carrying a specified label, prefer the node whose resource usage is most balanced, and match the pod's toleration list against node taints. In short, the scheduler finds a suitable node for each pod through the predicate and priority phases and writes the scheduling decision into etcd.
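
  As an illustration only (this manifest is not part of the deployment below, and the labels, taint key, and values are made up), a pod spec like the following sketch exercises both phases: resources.requests feeds the resource predicates, nodeSelector and the toleration restrict which nodes pass filtering, and the scheduler then scores whatever nodes remain:

apiVersion: v1
kind: Pod
metadata:
  name: sched-demo                # hypothetical example pod
spec:
  nodeSelector:
    disktype: ssd                 # only nodes carrying this label pass the predicate phase
  tolerations:
  - key: "dedicated"              # tolerate a hypothetical "dedicated=web:NoSchedule" taint
    operator: "Equal"
    value: "web"
    effect: "NoSchedule"
  containers:
  - name: app
    image: nginx:latest
    resources:
      requests:                   # used when checking whether a node has enough free resources
        cpu: "500m"
        memory: "256Mi"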

  controller-manager: the controller-manager runs the cluster's controllers, such as the replication controller, node controller, namespace controller, and service-account controller. As the cluster's internal management and control center, it is responsible for nodes, pod replicas, service endpoints (Endpoint), namespaces (Namespace), service accounts (ServiceAccount), and resource quotas (ResourceQuota). When a node goes down unexpectedly, the controller-manager detects it promptly and triggers automated repair so that the cluster's pod replicas stay in the desired state. It checks node status every 5 seconds; if it stops receiving heartbeats from a node it waits 40 seconds before marking the node unreachable, and if the node has not recovered 5 minutes after being marked unreachable, it evicts all pods on that node and recreates them on other available nodes. In short, the controller-manager keeps pod replicas in the state the user expects: if a pod deviates from that state, the appropriate controller restarts or recreates it until the actual state matches the desired state.

  Node (worker) components

  kube-proxy: kube-proxy is the Kubernetes network proxy and runs on every node. It reflects the Services defined in the Kubernetes API on that node and can do simple TCP, UDP, and SCTP forwarding to a set of backends, or round-robin forwarding across them. A Service must be created through the apiserver API for the proxy to configure; in practice kube-proxy implements Service access by maintaining network rules on the host and forwarding connections. kube-proxy watches the apiserver for changes to Service objects and programs iptables or IPVS rules accordingly. IPVS is more efficient than iptables; to use IPVS mode, the node running kube-proxy needs the ipvsadm and ipset packages installed and the ip_vs kernel module loaded. When kube-proxy starts in IPVS mode it verifies that the IPVS modules are available on the node and falls back to iptables mode if they are not. In IPVS mode, kube-proxy watches Service and Endpoints objects, calls the host kernel's netlink interface to create IPVS rules, and periodically synchronizes those rules with the Service and Endpoints objects so that the IPVS state matches the desired state; traffic to a Service is then redirected to one of the backend pods. IPVS uses a hash table as its underlying data structure and works in kernel space, so it redirects traffic faster and performs better when synchronizing proxy rules. It also offers more load-balancing algorithms, such as rr (round robin), lc (least connections), dh (destination hashing), sh (source hashing), sed (shortest expected delay), and nq (never queue). IPVS support has been generally available since Kubernetes v1.11 (the default kube-proxy mode is still iptables), and the default IPVS scheduling algorithm is rr.
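
  For reference, a sketch of how IPVS mode is typically enabled and inspected on a kubeadm-managed cluster, where kube-proxy reads its mode from the kube-proxy ConfigMap (these commands are illustrative and not part of the deployment steps below):

# switch kube-proxy to IPVS mode (kubeadm clusters keep the setting in this ConfigMap)
kubectl -n kube-system edit configmap kube-proxy          # set mode: "ipvs" under config.conf
kubectl -n kube-system rollout restart daemonset kube-proxy
# inspect the IPVS rules kube-proxy programs on a node
ipvsadm -Ln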

  kubelet: the kubelet is the agent that runs on every worker node and watches the pods that have been assigned to its node. Its main duties are: reporting the node's status to the master, accepting instructions and creating the containers of a pod, preparing the volumes a pod needs, reporting pod status, and running container health checks on the node.

  Client tools

  1. The kubectl command-line tool: kubectl is a client tool for managing a Kubernetes cluster from the command line. By default it looks for a file named config in the .kube directory under the user's home directory; that kubeconfig file holds the credentials used to connect to the cluster. You can also point kubectl at a different kubeconfig file by setting the KUBECONFIG environment variable or by passing the --kubeconfig option.
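
  A quick sketch of the three ways kubectl can locate its kubeconfig (the /opt/admin.conf path is just a placeholder):

# 1. default: ~/.kube/config
kubectl get nodes
# 2. point the KUBECONFIG environment variable at another file
export KUBECONFIG=/opt/admin.conf
kubectl get nodes
# 3. pass the file explicitly for a single invocation
kubectl --kubeconfig=/opt/admin.conf get nodes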

  2. Dashboard: the Dashboard is a web-based Kubernetes client. From its web UI you can view an overview of the applications running in the cluster, create or modify Kubernetes resources, scale Deployments, trigger rolling updates, delete pods, or use the wizard to deploy new applications.

  Deploying Kubernetes on Ubuntu 22.04 with containerd as the container runtime

  1. System tuning

   Add kernel parameters

root@k8s-master01:~# cat <<EOF >>/etc/sysctl.conf

> net.ipv4.ip_forward=1

> vm.max_map_count=262144

> kernel.pid_max=4194303

> fs.file-max=1000000

> net.ipv4.tcp_max_tw_buckets=6000

> net.netfilter.nf_conntrack_max=2097152

> net.bridge.bridge-nf-call-ip6tables = 1

> net.bridge.bridge-nf-call-iptables = 1

> vm.swappiness=0

> EOF

root@k8s-master01:~#

  Apply the configuration so the kernel parameters take effect

root@k8s-master01:~# sysctl -p

net.ipv4.ip_forward = 1

vm.max_map_count = 262144

kernel.pid_max = 4194303

fs.file-max = 1000000

net.ipv4.tcp_max_tw_buckets = 6000

net.netfilter.nf_conntrack_max = 2097152

sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-ip6tables: No such file or directory

sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory

vm.swappiness = 0

root@k8s-master01:~#

  Note: sysctl reports that bridge-nf-call-ip6tables and bridge-nf-call-iptables cannot be found; this is because the br_netfilter module is not loaded.

  Load the br_netfilter kernel module

root@k8s-master01:~# lsmod |grep br_netfilter

root@k8s-master01:~# modprobe br_netfilter

root@k8s-master01:~# lsmod |grep br_netfilter

br_netfilter           32768  0

bridge                307200  1 br_netfilter

root@k8s-master01:~#

  Run sysctl -p again to confirm that bridge-nf-call-ip6tables and bridge-nf-call-iptables are no longer reported as missing

root@k8s-master01:~# sysctl -p

net.ipv4.ip_forward = 1

vm.max_map_count = 262144

kernel.pid_max = 4194303

fs.file-max = 1000000

net.ipv4.tcp_max_tw_buckets = 6000

net.netfilter.nf_conntrack_max = 2097152

net.bridge.bridge-nf-call-ip6tables = 1

net.bridge.bridge-nf-call-iptables = 1

vm.swappiness = 0

root@k8s-master01:~#

  Load the kernel modules at boot

root@k8s-master01:~# cat <<EOF >>/etc/modules-load.d/modules.conf

> ip_vs

> ip_vs_lc

> ip_vs_lblc

> ip_vs_lblcr

> ip_vs_rr

> ip_vs_wrr

> ip_vs_sh

> ip_vs_dh

> ip_vs_fo

> ip_vs_nq

> ip_vs_sed

> ip_vs_ftp

> ip_vs_sh

> ip_tables

> ip_set

> ipt_set

> ipt_rpfilter

> ipt_REJECT

> ipip

> xt_set

> br_netfilter

> nf_conntrack

> overlay

> EOF

root@k8s-master01:~#

  Note: the modules listed above are loaded at the next reboot, or can be loaded immediately with modprobe, for example: tail -23 /etc/modules-load.d/modules.conf | xargs -L1 modprobe
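
  To confirm that the modules are actually loaded (after running the modprobe loop above or after a reboot), a quick check such as the following can be used:

lsmod | egrep 'ip_vs|nf_conntrack|br_netfilter|overlay'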

  Install the IPVS management tools and other dependencies

root@k8s-master01:~# apt update && apt install bash-completion conntrack ipset ipvsadm jq libseccomp2 nfs-common psmisc rsync socat -y

  Configure resource limits

root@k8s-master01:~# tail -10 /etc/security/limits.conf

root    soft    core            unlimited

root    hard    core            unlimited

root    soft    nproc           1000000

root    hard    nproc           1000000

root    soft    nofile          1000000

root    hard    nofile          1000000

root    soft    memlock         32000

root    hard    memlock         32000

root    soft    msgqueue        8192000

root    hard    msgqueue        8192000

root@k8s-master01:~#

  Note: the resource limits above take effect after the server is rebooted.

  Disable the swap device

root@k8s-master01:~# free -mh

               total        used        free      shared  buff/cache   available

Mem:           3.8Gi       330Mi       2.2Gi       1.0Mi       1.3Gi       3.2Gi

Swap:          3.8Gi          0B       3.8Gi

root@k8s-master01:~# swapoff -a

root@k8s-master01:~# free -mh 

               total        used        free      shared  buff/cache   available

Mem:           3.8Gi       329Mi       2.2Gi       1.0Mi       1.3Gi       3.2Gi

Swap:             0B          0B          0B

root@k8s-master01:~# cat /etc/fstab

# /etc/fstab: static file system information.

#

# Use 'blkid' to print the universally unique identifier for a

# device; this may be used with UUID= as a more robust way to name devices

# that works even if disks are added and removed. See fstab(5).

#

# <file system> <mount point>   <type>  <options>       <dump>  <pass>

# / was on /dev/ubuntu-vg/ubuntu-lv during curtin installation

/dev/disk/by-id/dm-uuid-LVM-yecQxSAXrKdCNj1XNrQeaacvLAmKdL5SVadOXV0zHSlfkdpBEsaVZ9erw8Ac9gpm / ext4 defaults 0 1

# /boot was on /dev/sda2 during curtin installation

/dev/disk/by-uuid/80fe59b8-eb79-4ce9-a87d-134bc160e976 /boot ext4 defaults 0 1

#/swap.img      none    swap    sw      0       0

root@k8s-master01:~#
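
  swapoff -a only disables swap until the next reboot; in this environment the swap entry in /etc/fstab is already commented out, but on a fresh node both steps can be scripted, for example (a sketch using GNU sed):

swapoff -a
# comment out every swap entry so it stays disabled after a reboot
sed -ri '/\sswap\s/s/^([^#])/#\1/' /etc/fstab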

  2. Install containerd with an automated script

root@k8s-master01:/usr/local/src# cat runtime-install.sh

#!/bin/bash

DIR=`pwd`

PACKAGE_NAME="docker-20.10.19.tgz"

DOCKER_FILE=${DIR}/${PACKAGE_NAME}

#read -p "请输入使用docker server的普通用户名称,默认为docker:" USERNAME

if test -z ${USERNAME};then

  USERNAME=docker

fi

centos_install_docker(){

  grep "Kernel" /etc/issue &> /dev/null

  if [ $? -eq 0 ];then

    /bin/echo  "当前系统是`cat /etc/redhat-release`,即将开始系统初始化、配置docker-compose与安装docker" && sleep 1

    systemctl stop firewalld && systemctl disable firewalld && echo "防火墙已关闭" && sleep 1

    systemctl stop NetworkManager && systemctl disable NetworkManager && echo "NetworkManager" && sleep 1

    sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/sysconfig/selinux && setenforce  0 && echo "selinux 已关闭" && sleep 1

    \cp ${DIR}/limits.conf /etc/security/limits.conf

    \cp ${DIR}/sysctl.conf /etc/sysctl.conf

    /bin/tar xvf ${DOCKER_FILE}

    \cp docker/*  /usr/local/bin

    mkdir /etc/docker && \cp daemon.json /etc/docker

    \cp containerd.service /lib/systemd/system/containerd.service

    \cp docker.service  /lib/systemd/system/docker.service

    \cp docker.socket /lib/systemd/system/docker.socket

    \cp ${DIR}/docker-compose-Linux-x86_64_1.28.6 /usr/bin/docker-compose

     

    groupadd docker && useradd docker -s /sbin/nologin -g docker

    id -u  ${USERNAME} &> /dev/null

    if [ $? -ne 0 ];then

      useradd ${USERNAME}

      usermod ${USERNAME} -G docker

    else

      usermod ${USERNAME} -G docker

    fi

    docker_install_success_info

  fi

}

ubuntu_install_docker(){

  grep "Ubuntu" /etc/issue &> /dev/null

  if [ $? -eq 0 ];then

    /bin/echo  "当前系统是`cat /etc/issue`,即将开始系统初始化、配置docker-compose与安装docker" && sleep 1

    \cp ${DIR}/limits.conf /etc/security/limits.conf

    \cp ${DIR}/sysctl.conf /etc/sysctl.conf

     

    /bin/tar xvf ${DOCKER_FILE}

    \cp docker/*  /usr/local/bin

    mkdir /etc/docker && \cp daemon.json /etc/docker

    \cp containerd.service /lib/systemd/system/containerd.service

    \cp docker.service  /lib/systemd/system/docker.service

    \cp docker.socket /lib/systemd/system/docker.socket

    \cp ${DIR}/docker-compose-Linux-x86_64_1.28.6 /usr/bin/docker-compose

    groupadd docker && useradd docker -r -m -s /sbin/nologin -g docker

    id -u  ${USERNAME} &> /dev/null

    if [ $? -ne 0 ];then

      groupadd  -r  ${USERNAME}

      useradd -r -m -s /bin/bash -g ${USERNAME} ${USERNAME}

      usermod ${USERNAME} -G docker

    else

      usermod ${USERNAME} -G docker

    fi 

    docker_install_success_info

  fi

}

ubuntu_install_containerd(){

  DIR=`pwd`

  PACKAGE_NAME="containerd-1.6.20-linux-amd64.tar.gz"

  CONTAINERD_FILE=${DIR}/${PACKAGE_NAME}

  NERDCTL="nerdctl-1.3.0-linux-amd64.tar.gz"

  CNI="cni-plugins-linux-amd64-v1.2.0.tgz"

  RUNC="runc.amd64"

   

  mkdir -p /etc/containerd /etc/nerdctl

  tar xvf ${CONTAINERD_FILE} &&  cp bin/* /usr/local/bin/

  \cp runc.amd64   /usr/bin/runc && chmod  a+x /usr/bin/runc

  \cp config.toml  /etc/containerd/config.toml

  \cp containerd.service /lib/systemd/system/containerd.service

  #CNI

  mkdir  /opt/cni/bin -p

  tar xvf ${CNI}  -C  /opt/cni/bin/

  #nerdctl

  tar xvf ${NERDCTL}  -C /usr/local/bin/

  \cp nerdctl.toml /etc/nerdctl/nerdctl.toml

  containerd_install_success_info

}

containerd_install_success_info(){

    /bin/echo "正在启动containerd server并设置为开机自启动!"

    #start containerd  service

    systemctl daemon-reload && systemctl  restart  containerd && systemctl  enable containerd

    /bin/echo "containerd is:" `systemctl  is-active  containerd`

    sleep 0.5 && /bin/echo "containerd server安装完成,欢迎进入containerd的容器世界!" && sleep 1

}

docker_install_success_info(){

    /bin/echo "正在启动docker server并设置为开机自启动!"

    systemctl  enable containerd.service && systemctl  restart containerd.service

    systemctl  enable docker.service && systemctl  restart docker.service

    systemctl  enable docker.socket && systemctl  restart docker.socket

    sleep 0.5 && /bin/echo "docker server安装完成,欢迎进入docker世界!" && sleep 1

}

usage(){

    echo "使用方法为[shell脚本  containerd|docker]"

}

main(){

  RUNTIME=$1

  case ${RUNTIME}  in

    docker)

      centos_install_docker 

      ubuntu_install_docker

      ;;

    containerd)

      ubuntu_install_containerd

      ;;

    *)

      usage;

    esac;

}

main $1

root@k8s-master01:/usr/local/src#

  Note: the script above uses a few functions to cover the steps for installing docker, docker-compose, and containerd on CentOS and Ubuntu; to use it, place the required binary packages and configuration files in the same directory as the script.

  Run the script to install containerd

root@k8s-master01:/usr/local/src# sh runtime-install.sh containerd

bin/

bin/containerd-shim

bin/containerd-shim-runc-v1

bin/containerd-stress

bin/containerd

bin/ctr

bin/containerd-shim-runc-v2

./

./loopback

./bandwidth

./ptp

./vlan

./host-device

./tuning

./vrf

./sbr

./dhcp

./static

./firewall

./macvlan

./dummy

./bridge

./ipvlan

./portmap

./host-local

nerdctl

containerd-rootless-setuptool.sh

containerd-rootless.sh

正在启动containerd server并设置为开机自启动!

Created symlink /etc/systemd/system/multi-user.target.wants/containerd.service → /lib/systemd/system/containerd.service.

containerd is: active

containerd server安装完成,欢迎进入containerd的容器世界!

root@k8s-master01:/usr/local/src#

  Verify that containerd was installed successfully and that nerdctl works

root@k8s-master01:/usr/local/src# systemctl status containerd

● containerd.service - containerd container runtime

     Loaded: loaded (/lib/systemd/system/containerd.service; enabled; vendor preset: enabled)

     Active: active (running) since Wed 2023-04-12 08:13:02 UTC; 59s ago

       Docs: https://containerd.io

   Main PID: 1136 (containerd)

      Tasks: 10

     Memory: 13.7M

        CPU: 609ms

     CGroup: /system.slice/containerd.service

             └─1136 /usr/local/bin/containerd

Apr 12 08:13:02 k8s-master01.ik8s.cc containerd[1136]: time="2023-04-12T08:13:02.134059545Z" level=info msg="containerd successfully booted in 0.032924s"

Apr 12 08:13:02 k8s-master01.ik8s.cc systemd[1]: Started containerd container runtime.

Apr 12 08:13:02 k8s-master01.ik8s.cc containerd[1136]: time="2023-04-12T08:13:02.133927633Z" level=info msg="Start recovering state"

Apr 12 08:13:02 k8s-master01.ik8s.cc containerd[1136]: time="2023-04-12T08:13:02.135031130Z" level=info msg="Start event monitor"

Apr 12 08:13:02 k8s-master01.ik8s.cc containerd[1136]: time="2023-04-12T08:13:02.135070328Z" level=info msg="Start snapshots syncer"

Apr 12 08:13:02 k8s-master01.ik8s.cc containerd[1136]: time="2023-04-12T08:13:02.135083377Z" level=info msg="Start cni network conf syncer for default"

Apr 12 08:13:02 k8s-master01.ik8s.cc containerd[1136]: time="2023-04-12T08:13:02.135089191Z" level=info msg="Start streaming server"

Apr 12 08:13:02 k8s-master01.ik8s.cc systemd[1]: /lib/systemd/system/containerd.service:1: Assignment outside of section. Ignoring.

Apr 12 08:13:02 k8s-master01.ik8s.cc systemd[1]: /lib/systemd/system/containerd.service:1: Assignment outside of section. Ignoring.

Apr 12 08:14:02 k8s-master01.ik8s.cc systemd[1]: /lib/systemd/system/containerd.service:1: Assignment outside of section. Ignoring.

root@k8s-master01:/usr/local/src# nerdctl ps -a

CONTAINER ID    IMAGE    COMMAND    CREATED    STATUS    PORTS    NAMES

root@k8s-master01:/usr/local/src#

  Run an nginx container to confirm that containers start correctly

root@k8s-master01:/usr/local/src# nerdctl ps -a

CONTAINER ID    IMAGE    COMMAND    CREATED    STATUS    PORTS    NAMES

root@k8s-master01:/usr/local/src# nerdctl run -d -p 80:80 nginx

docker.io/library/nginx:latest:                                                   resolved       |++++++++++++++++++++++++++++++++++++++|

index-sha256:dbf632af6963e56f6b3fc4196578b75742482490c236f5009b3e68cf93a62997:    done           |++++++++++++++++++++++++++++++++++++++|

manifest-sha256:bfb112db4075460ec042ce13e0b9c3ebd982f93ae0be155496d050bb70006750: done           |++++++++++++++++++++++++++++++++++++++|

config-sha256:080ed0ed8312deca92e9a769b518cdfa20f5278359bd156f3469dd8fa532db6b:   done           |++++++++++++++++++++++++++++++++++++++|

layer-sha256:9862f2ee2e8cd9dab487d7dc2152a3f76cb503772dfb8e830973264340d6233e:    done           |++++++++++++++++++++++++++++++++++++++|

layer-sha256:f1f26f5702560b7e591bef5c4d840f76a232bf13fd5aefc4e22077a1ae4440c7:    done           |++++++++++++++++++++++++++++++++++++++|

layer-sha256:86b2457cc2b0d68200061e3420623c010de5e6fb184e18328a46ef22dbba490a:    done           |++++++++++++++++++++++++++++++++++++++|

layer-sha256:7f7f30930c6b1fa9e421ba5d234c3030a838740a22a42899d3df5f87e00ea94f:    done           |++++++++++++++++++++++++++++++++++++++|

layer-sha256:e1eeb0f1c06b25695a5b9df587edf4bf12a5af9432696811dd8d5fcfd01d7949:    done           |++++++++++++++++++++++++++++++++++++++|

layer-sha256:2836b727df80c28853d6c505a2c3a5959316e48b1cff42d98e70cb905b166c82:    done           |++++++++++++++++++++++++++++++++++++++|

elapsed: 18.5s                                                                    total:  54.4 M (2.9 MiB/s)                                      

83f7da147436d7f621fd85e9546f44beed9511525f17bf1dc2c01230108d31a9

root@k8s-master01:/usr/local/src# nerdctl ps

CONTAINER ID    IMAGE                             COMMAND                   CREATED           STATUS    PORTS                 NAMES

83f7da147436    docker.io/library/nginx:latest    "/docker-entrypoint.…"    12 seconds ago    Up        0.0.0.0:80->80/tcp    nginx-83f7d

root@k8s-master01:/usr/local/src#

  Open port 80 on master01's IP address in a browser to check that nginx is reachable

  Note: the nginx container we just started is reachable, which confirms that the containerd environment deployed by the script is ready.
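
  If a browser is not handy, the same check can be done from any host that can reach the master (192.168.0.71 is this environment's master IP):

curl -I http://192.168.0.71
# an "HTTP/1.1 200 OK" response with "Server: nginx" confirms the container is serving traffic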

  Test deleting the container

root@k8s-master01:/usr/local/src# nerdctl ps

CONTAINER ID    IMAGE                             COMMAND                   CREATED          STATUS    PORTS                 NAMES

83f7da147436    docker.io/library/nginx:latest    "/docker-entrypoint.…"    4 minutes ago    Up        0.0.0.0:80->80/tcp    nginx-83f7d

root@k8s-master01:/usr/local/src# nerdctl rm -f 83f7da147436

83f7da147436

root@k8s-master01:/usr/local/src# nerdctl ps -a

CONTAINER ID    IMAGE    COMMAND    CREATED    STATUS    PORTS    NAMES

root@k8s-master01:/usr/local/src#

  3. Configure the apt repository and install kubeadm, kubelet, and kubectl

root@k8s-master01:~# apt-get update && apt-get install -y apt-transport-https -y

root@k8s-master01:~# curl https://mirrors.aliyun.com/kubernetes/apt/doc/apt-key.gpg |apt-key add -

root@k8s-master01:~# cat <<EOF >/etc/apt/sources.list.d/kubernetes.list

> deb https://mirrors.aliyun.com/kubernetes/apt/ kubernetes-xenial main

> EOF

root@k8s-master01:~#

  Install kubeadm, kubelet, and kubectl

root@k8s-master01:~# apt update && apt-get install -y kubeadm=1.26.3-00 kubectl=1.26.3-00 kubelet=1.26.3-00

  Note: after adding the repository you must run apt update to refresh the package index; you can then list every kubeadm version in the repository with apt-cache madison kubeadm. Use the syntax above to install a specific version; without a version, the latest one in the repository is installed.
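
  For example, to list the available versions and, optionally, pin the installed packages so a later apt upgrade does not move them (the hold step is optional and not part of the original procedure):

apt-cache madison kubeadm | head
# optional: prevent routine upgrades from bumping the cluster packages
apt-mark hold kubeadm kubelet kubectl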

  Verify the kubeadm version

root@k8s-master01:~# kubeadm  version

kubeadm version: &version.Info{Major:"1", Minor:"26", GitVersion:"v1.26.3", GitCommit:"9e644106593f3f4aa98f8a84b23db5fa378900bd", GitTreeState:"clean", BuildDate:"2023-03-15T13:38:47Z", GoVersion:"go1.19.7", Compiler:"gc", Platform:"linux/amd64"}

root@k8s-master01:~#

  List the images required to deploy Kubernetes

root@k8s-master01:~# kubeadm config images list  --kubernetes-version v1.26.3

registry.k8s.io/kube-apiserver:v1.26.3

registry.k8s.io/kube-controller-manager:v1.26.3

registry.k8s.io/kube-scheduler:v1.26.3

registry.k8s.io/kube-proxy:v1.26.3

registry.k8s.io/pause:3.9

registry.k8s.io/etcd:3.5.6-0

registry.k8s.io/coredns/coredns:v1.9.3

root@k8s-master01:~#

  Note: these are the images required to deploy Kubernetes v1.26.3. From inside mainland China they can only be pulled through a proxy; there are two workarounds: use the mirrored images in Alibaba Cloud's registry, or use a proxy.

  Option 1: use the images from the Alibaba Cloud registry

root@k8s-master01:~# kubeadm config images list  --kubernetes-version v1.26.3 --image-repository="registry.aliyuncs.com/google_containers"

registry.aliyuncs.com/google_containers/kube-apiserver:v1.26.3

registry.aliyuncs.com/google_containers/kube-controller-manager:v1.26.3

registry.aliyuncs.com/google_containers/kube-scheduler:v1.26.3

registry.aliyuncs.com/google_containers/kube-proxy:v1.26.3

registry.aliyuncs.com/google_containers/pause:3.9

registry.aliyuncs.com/google_containers/etcd:3.5.6-0

registry.aliyuncs.com/google_containers/coredns:v1.9.3

root@k8s-master01:~#

  Note: the --image-repository option selects the image registry; pass the same option later when pulling the images and when initializing the master node.

  Option 2: export the proxy settings into the current terminal's environment variables

root@k8s-master01:~# export https_proxy=http://192.168.0.80:8123

root@k8s-master01:~# export http_proxy=http://192.168.0.80:8123

root@k8s-master01:~#

  Note: substitute the IP address and port of your own proxy server.

  Verify that the current terminal's egress IP is the proxy server's address
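
  One simple way to check is to ask an external service which address it sees; with the proxy variables in effect, the reported address should be the proxy's egress IP rather than this host's (ifconfig.me is just one of several such services):

curl -s https://ifconfig.me && echo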

  Test: use nerdctl to pull the registry.k8s.io/etcd:3.5.6-0 image and confirm that it downloads
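
  With the proxy variables exported in the same terminal, the pull can be attempted directly against the upstream registry:

nerdctl pull registry.k8s.io/etcd:3.5.6-0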

  Note: pulling images directly from the Google registry now works, but it is fairly slow through the proxy, so the domestic mirror registry is recommended.

  Pull the images required by Kubernetes, specifying the Alibaba Cloud registry

root@k8s-master01:~# kubeadm config images pull  --kubernetes-version v1.26.3 --image-repository="registry.aliyuncs.com/google_containers"

[config/images] Pulled registry.aliyuncs.com/google_containers/kube-apiserver:v1.26.3

[config/images] Pulled registry.aliyuncs.com/google_containers/kube-controller-manager:v1.26.3

[config/images] Pulled registry.aliyuncs.com/google_containers/kube-scheduler:v1.26.3

[config/images] Pulled registry.aliyuncs.com/google_containers/kube-proxy:v1.26.3

[config/images] Pulled registry.aliyuncs.com/google_containers/pause:3.9

[config/images] Pulled registry.aliyuncs.com/google_containers/etcd:3.5.6-0

[config/images] Pulled registry.aliyuncs.com/google_containers/coredns:v1.9.3

root@k8s-master01:~# nerdctl images

REPOSITORY                                                         TAG        IMAGE ID        CREATED           PLATFORM       SIZE         BLOB SIZE

registry.aliyuncs.com/google_containers/coredns                    v1.9.3     8e352a029d30    6 seconds ago     linux/amd64    47.0 MiB     14.2 MiB

registry.aliyuncs.com/google_containers/coredns                    <none>     8e352a029d30    6 seconds ago     linux/amd64    47.0 MiB     14.2 MiB

registry.aliyuncs.com/google_containers/etcd                       3.5.6-0    dd75ec974b0a    11 seconds ago    linux/amd64    289.3 MiB    97.8 MiB

registry.aliyuncs.com/google_containers/etcd                       <none>     dd75ec974b0a    11 seconds ago    linux/amd64    289.3 MiB    97.8 MiB

registry.aliyuncs.com/google_containers/kube-apiserver             v1.26.3    b8dda58b0c68    44 seconds ago    linux/amd64    131.4 MiB    33.7 MiB

registry.aliyuncs.com/google_containers/kube-apiserver             <none>     b8dda58b0c68    44 seconds ago    linux/amd64    131.4 MiB    33.7 MiB

registry.aliyuncs.com/google_containers/kube-controller-manager    v1.26.3    28c0deb96fd8    37 seconds ago    linux/amd64    121.3 MiB    30.7 MiB

registry.aliyuncs.com/google_containers/kube-controller-manager    <none>     28c0deb96fd8    37 seconds ago    linux/amd64    121.3 MiB    30.7 MiB

registry.aliyuncs.com/google_containers/kube-proxy                 v1.26.3    d89b6c6a8ecc    29 seconds ago    linux/amd64    66.9 MiB     20.5 MiB

registry.aliyuncs.com/google_containers/kube-proxy                 <none>     d89b6c6a8ecc    29 seconds ago    linux/amd64    66.9 MiB     20.5 MiB

registry.aliyuncs.com/google_containers/kube-scheduler             v1.26.3    ef87c0880906    33 seconds ago    linux/amd64    57.5 MiB     16.7 MiB

registry.aliyuncs.com/google_containers/kube-scheduler             <none>     ef87c0880906    33 seconds ago    linux/amd64    57.5 MiB     16.7 MiB

registry.aliyuncs.com/google_containers/pause                      3.9        7031c1b28338    27 seconds ago    linux/amd64    732.0 KiB    314.0 KiB

registry.aliyuncs.com/google_containers/pause                      <none>     7031c1b28338    27 seconds ago    linux/amd64    732.0 KiB    314.0 KiB

<none>                                                             <none>     b8dda58b0c68    44 seconds ago    linux/amd64    131.4 MiB    33.7 MiB

<none>                                                             <none>     8e352a029d30    6 seconds ago     linux/amd64    47.0 MiB     14.2 MiB

<none>                                                             <none>     ef87c0880906    33 seconds ago    linux/amd64    57.5 MiB     16.7 MiB

<none>                                                             <none>     d89b6c6a8ecc    29 seconds ago    linux/amd64    66.9 MiB     20.5 MiB

<none>                                                             <none>     28c0deb96fd8    37 seconds ago    linux/amd64    121.3 MiB    30.7 MiB

<none>                                                             <none>     7031c1b28338    27 seconds ago    linux/amd64    732.0 KiB    314.0 KiB

<none>                                                             <none>     dd75ec974b0a    11 seconds ago    linux/amd64    289.3 MiB    97.8 MiB

root@k8s-master01:~#

  4. Initialize the Kubernetes cluster

  Initialize the master node

root@k8s-master01:~# kubeadm init --apiserver-advertise-address=192.168.0.71 \

>              --apiserver-bind-port=6443 \

>              --kubernetes-version=v1.26.3 \

>              --pod-network-cidr=10.100.0.0/16 \

>              --service-cidr=10.200.0.0/16 \

>              --service-dns-domain=cluster.local \

>              --image-repository="registry.aliyuncs.com/google_containers" \

>              --ignore-preflight-errors=swap

[init] Using Kubernetes version: v1.26.3

[preflight] Running pre-flight checks

[preflight] Pulling images required for setting up a Kubernetes cluster

[preflight] This might take a minute or two, depending on the speed of your internet connection

[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'

[certs] Using certificateDir folder "/etc/kubernetes/pki"

[certs] Generating "ca" certificate and key

[certs] Generating "apiserver" certificate and key

[certs] apiserver serving cert is signed for DNS names [k8s-master01.ik8s.cc kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.200.0.1 192.168.0.71]

[certs] Generating "apiserver-kubelet-client" certificate and key

[certs] Generating "front-proxy-ca" certificate and key

[certs] Generating "front-proxy-client" certificate and key

[certs] Generating "etcd/ca" certificate and key

[certs] Generating "etcd/server" certificate and key

[certs] etcd/server serving cert is signed for DNS names [k8s-master01.ik8s.cc localhost] and IPs [192.168.0.71 127.0.0.1 ::1]

[certs] Generating "etcd/peer" certificate and key

[certs] etcd/peer serving cert is signed for DNS names [k8s-master01.ik8s.cc localhost] and IPs [192.168.0.71 127.0.0.1 ::1]

[certs] Generating "etcd/healthcheck-client" certificate and key

[certs] Generating "apiserver-etcd-client" certificate and key

[certs] Generating "sa" key and public key

[kubeconfig] Using kubeconfig folder "/etc/kubernetes"

[kubeconfig] Writing "admin.conf" kubeconfig file

[kubeconfig] Writing "kubelet.conf" kubeconfig file

[kubeconfig] Writing "controller-manager.conf" kubeconfig file

[kubeconfig] Writing "scheduler.conf" kubeconfig file

[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"

[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"

[kubelet-start] Starting the kubelet

[control-plane] Using manifest folder "/etc/kubernetes/manifests"

[control-plane] Creating static Pod manifest for "kube-apiserver"

[control-plane] Creating static Pod manifest for "kube-controller-manager"

[control-plane] Creating static Pod manifest for "kube-scheduler"

[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"

[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s

[apiclient] All control plane components are healthy after 7.504393 seconds

[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace

[kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster

[upload-certs] Skipping phase. Please see --upload-certs

[mark-control-plane] Marking the node k8s-master01.ik8s.cc as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]

[mark-control-plane] Marking the node k8s-master01.ik8s.cc as control-plane by adding the taints [node-role.kubernetes.io/control-plane:NoSchedule]

[bootstrap-token] Using token: xc1xea.briuce4ykh8qulcn

[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles

[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes

[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials

[bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token

[bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster

[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace

[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key

[addons] Applied essential addon: CoreDNS

[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube

  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config

  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.

Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:

  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.0.71:6443 --token xc1xea.briuce4ykh8qulcn \

        --discovery-token-ca-cert-hash sha256:ef06b68c35354849f25b985efb36eefc91dbc6cc1a7591537dd563cfd13e7504

root@k8s-master01:~#

  Note: output like the above means the Kubernetes master node was initialized successfully.

  Create the .kube directory in the user's home directory and copy the admin kubeconfig

root@k8s-master01:~# mkdir -p $HOME/.kube

root@k8s-master01:~# cp -i /etc/kubernetes/admin.conf $HOME/.kube/config

root@k8s-master01:~# chown $(id -u):$(id -g) $HOME/.kube/config

root@k8s-master01:~#

  Verify that kubectl works

root@k8s-master01:~# kubectl get nodes

NAME                   STATUS     ROLES           AGE   VERSION

k8s-master01.ik8s.cc   NotReady   control-plane   79s   v1.26.3

root@k8s-master01:~# kubectl get pods -n kube-system

NAME                                           READY   STATUS    RESTARTS   AGE

coredns-5bbd96d687-822lh                       0/1     Pending   0          71s

coredns-5bbd96d687-mxvth                       0/1     Pending   0          71s

etcd-k8s-master01.ik8s.cc                      1/1     Running   0          85s

kube-apiserver-k8s-master01.ik8s.cc            1/1     Running   0          85s

kube-controller-manager-k8s-master01.ik8s.cc   1/1     Running   0          87s

kube-proxy-bt79n                               1/1     Running   0          71s

kube-scheduler-k8s-master01.ik8s.cc            1/1     Running   0          85s

root@k8s-master01:~#

  Note: output like the above means kubectl is working correctly.

  Join the worker nodes

root@k8s-node01:~# kubeadm join 192.168.0.71:6443 --token xc1xea.briuce4ykh8qulcn \

>         --discovery-token-ca-cert-hash sha256:ef06b68c35354849f25b985efb36eefc91dbc6cc1a7591537dd563cfd13e7504

[preflight] Running pre-flight checks

[preflight] Reading configuration from the cluster...

[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'

[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"

[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"

[kubelet-start] Starting the kubelet

[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:

* Certificate signing request was sent to apiserver and a response was received.

* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

root@k8s-node01:~#

  Note: output like the above means the node joined the cluster successfully.

  Verify on the master node with kubectl get nodes that all nodes have joined the cluster

root@k8s-master01:~# kubectl get nodes

NAME                   STATUS     ROLES           AGE     VERSION

k8s-master01.ik8s.cc   NotReady   control-plane   4m31s   v1.26.3

k8s-node01.ik8s.cc     NotReady   <none>          68s     v1.26.3

k8s-node02.ik8s.cc     NotReady   <none>          69s     v1.26.3

root@k8s-master01:~#

  Note: both worker nodes have now joined the cluster, but every node is NotReady because no network plugin has been deployed yet.

  5. Deploy the Calico network plugin

   Download the Calico deployment manifest

root@k8s-master01:~# wget https://docs.projectcalico.org/v3.25/manifests/calico.yaml --no-check-certificate

--2023-04-13 12:56:42--  https://docs.projectcalico.org/v3.25/manifests/calico.yaml

Resolving docs.projectcalico.org (docs.projectcalico.org)... 34.142.149.67, 52.74.166.77, 2406:da18:880:3800::c8, ...

Connecting to docs.projectcalico.org (docs.projectcalico.org)|34.142.149.67|:443... connected.

HTTP request sent, awaiting response... 301 Moved Permanently

Location: https://docs.tigera.io/archive/v3.25/manifests/calico.yaml [following]

--2023-04-13 12:56:43--  https://docs.tigera.io/archive/v3.25/manifests/calico.yaml

Resolving docs.tigera.io (docs.tigera.io)... 34.126.184.144, 18.139.194.139, 2406:da18:880:3800::c8, ...

Connecting to docs.tigera.io (docs.tigera.io)|34.126.184.144|:443... connected.

HTTP request sent, awaiting response... 200 OK

Length: 238089 (233K) [text/yaml]

Saving to: ‘calico.yaml’

calico.yaml                                100%[=======================================================================================>] 232.51K  28.6KB/s    in 8.1s   

2023-04-13 12:56:52 (28.6 KB/s) - ‘calico.yaml’ saved [238089/238089]

root@k8s-master01:~#

  Edit calico.yaml

  Note: the pod network CIDR configured here must match the --pod-network-cidr passed to kubeadm init.

  Specify the network interface in calico.yaml

  Note: set the interface according to the NIC name of the servers in your own environment; here every server uses ens33. Be careful to keep the YAML indentation intact when editing the file.
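
  For reference, both edits land in the env section of the calico-node DaemonSet inside calico.yaml and look roughly like this (CALICO_IPV4POOL_CIDR is commented out by default and has to be uncommented; the CIDR matches the --pod-network-cidr used at kubeadm init, and ens33 is this environment's NIC name — adjust both to your own setup):

            - name: CALICO_IPV4POOL_CIDR
              value: "10.100.0.0/16"
            - name: IP_AUTODETECTION_METHOD
              value: "interface=ens33"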

  List the images Calico needs

root@k8s-master01:~# cat calico.yaml |grep image:

          image: docker.io/calico/cni:v3.25.0

          image: docker.io/calico/cni:v3.25.0

          image: docker.io/calico/node:v3.25.0

          image: docker.io/calico/node:v3.25.0

          image: docker.io/calico/kube-controllers:v3.25.0

root@k8s-master01:~#

  Pull the images ahead of time

root@k8s-master01:~# nerdctl pull docker.io/calico/cni:v3.25.0

WARN[0000] skipping verifying HTTPS certs for "docker.io"

docker.io/calico/cni:v3.25.0:                                                     resolved       |++++++++++++++++++++++++++++++++++++++|

index-sha256:a38d53cb8688944eafede2f0eadc478b1b403cefeff7953da57fe9cd2d65e977:    done           |++++++++++++++++++++++++++++++++++++++|

manifest-sha256:ec0fa42b4d03398995800b44131b200aee2c76354d405d5d91689ec99cc70c56: done           |++++++++++++++++++++++++++++++++++++++|

config-sha256:d70a5947d57e5ab3340d126a38e6ae51bd9e8e0b342daa2012e78d8868bed5b7:   done           |++++++++++++++++++++++++++++++++++++++|

layer-sha256:4f4fb700ef54461cfa02571ae0db9a0dc1e0cdb5577484a6d75e68dc38e8acc1:    done           |++++++++++++++++++++++++++++++++++++++|

layer-sha256:8833c0c1f858ee768478151ab71b3b0a3eeae160963b7d006c05ec9b493e8940:    done           |++++++++++++++++++++++++++++++++++++++|

layer-sha256:8729f736e48f5c644d03591958297cd4c1942b5aaf451074f8cd80bac898149a:    done           |++++++++++++++++++++++++++++++++++++++|

layer-sha256:79eb57bec78a8d14d1085acffe5577fe88b470d38dde295a1aba66d17e663d61:    done           |++++++++++++++++++++++++++++++++++++++|

layer-sha256:bc84ed7b6a651f36d1486db36f1c2c1181b6c14463ea310823e6c2f69d0af100:    done           |++++++++++++++++++++++++++++++++++++++|

layer-sha256:84d025afc533dc367b05ee95125697adff781f24ef1366c522d8d7f65df0319b:    done           |++++++++++++++++++++++++++++++++++++++|

layer-sha256:ae5822c70daca619af58b197f6a3ea6f7cac1b785f6fbea673fb37be4853f6d5:    done           |++++++++++++++++++++++++++++++++++++++|

layer-sha256:df79b6dbf625547000b30f4c62ac5c5133fcb22b7d85d16b6f4bbb3c7733fc27:    done           |++++++++++++++++++++++++++++++++++++++|

layer-sha256:5e4c3414e9caf71bcc544ba2d483d0f05818e82c6a1b900e55ccea1c635fbd7b:    done           |++++++++++++++++++++++++++++++++++++++|

elapsed: 31.5s                                                                    total:  83.9 M (2.7 MiB/s)                                      

root@k8s-master01:~# nerdctl pull docker.io/calico/node:v3.25.0

WARN[0000] skipping verifying HTTPS certs for "docker.io"

docker.io/calico/node:v3.25.0:                                                    resolved       |++++++++++++++++++++++++++++++++++++++|

index-sha256:a85123d1882832af6c45b5e289c6bb99820646cb7d4f6006f98095168808b1e6:    done           |++++++++++++++++++++++++++++++++++++++|

manifest-sha256:56db28c3632192f56a1ff1360b83ef640fc8f41fa21a83126194811713e2f022: done           |++++++++++++++++++++++++++++++++++++++|

config-sha256:08616d26b8e74867402274687491e5978ba4a6ded94e9f5ecc3e364024e5683e:   done           |++++++++++++++++++++++++++++++++++++++|

layer-sha256:4044e890e577f930490efd4f931549733818f28b5d5f8c63f47617e19a48a177:    done           |++++++++++++++++++++++++++++++++++++++|

layer-sha256:2db093bd3f50f881bd382b13a13d661699ca335fea1a83d167528f85db2e74cd:    done           |++++++++++++++++++++++++++++++++++++++|

elapsed: 39.9s                                                                    total:  83.1 M (2.1 MiB/s)                                      

root@k8s-master01:~# nerdctl pull docker.io/calico/kube-controllers:v3.25.0

WARN[0000] skipping verifying HTTPS certs for "docker.io"

docker.io/calico/kube-controllers:v3.25.0:                                        resolved       |++++++++++++++++++++++++++++++++++++++|

index-sha256:c45af3a9692d87a527451cf544557138fedf86f92b6e39bf2003e2fdb848dce3:    done           |++++++++++++++++++++++++++++++++++++++|

manifest-sha256:095484425365e4eac24abfdd28ba14d133ffcc782c36b5d4f08533ef75ee91e4: done           |++++++++++++++++++++++++++++++++++++++|

config-sha256:5e785d005ccc1ab22527a783835cf2741f6f5f385a8956144c661f8c23ae9d78:   done           |++++++++++++++++++++++++++++++++++++++|

layer-sha256:a26da0f61ecbf48440a5921ea6bc8bafbebc76f139cb43387e6e6a3987505fda:    done           |++++++++++++++++++++++++++++++++++++++|

layer-sha256:0a74accf7d2383a32999addc064ca91f45ad9c94fda36ffe19b58110cf6c43eb:    done           |++++++++++++++++++++++++++++++++++++++|

layer-sha256:cbb0c534f93adbeeb07299bec2596c609cc5828830830fd3b8897580f6d9ff50:    done           |++++++++++++++++++++++++++++++++++++++|

layer-sha256:107bccf16111c3f6c222bd3979f1fbd9102b2cf8b6d205a3fbaf5dabe7ecfc71:    done           |++++++++++++++++++++++++++++++++++++++|

layer-sha256:45e5e2183223e3fb71640062e7bf8e22d72906dc71118746f3b83ba69c550d14:    done           |++++++++++++++++++++++++++++++++++++++|

layer-sha256:9f79d0f3e4006841b078828fb7c68d1c974236839d25c45aaaf06445033a79dc:    done           |++++++++++++++++++++++++++++++++++++++|

layer-sha256:c5c5f58ee63d0b207f01342606edd6f09c09e49e46bb4d990edbc44f5f4beac5:    done           |++++++++++++++++++++++++++++++++++++++|

layer-sha256:4b89205dc639f7d14e32893b3a4051fe6f7f6e8ed4848a74c2c7531038fce120:    done           |++++++++++++++++++++++++++++++++++++++|

layer-sha256:afd04547218a7eed341a7ef3b72bbac362970a4c429b348e7b71b47995cf48ed:    done           |++++++++++++++++++++++++++++++++++++++|

layer-sha256:7bc3468f50bc15474a6d5c338778467c7680043a3bac13b0789f6bda3c9c8c50:    done           |++++++++++++++++++++++++++++++++++++++|

layer-sha256:1724dd4f31db707f61068ee56a270ece4fa8f604b64935cf02fc5a9dac7da88d:    done           |++++++++++++++++++++++++++++++++++++++|

layer-sha256:b57460b249bbbb3e002c07b80d4311a42a8c2ce6ca25fc7c68a3d240afc290be:    done           |++++++++++++++++++++++++++++++++++++++|

layer-sha256:ec7526ad34c20113824da392a3ff4646f38cd5c884ad2871817b1271cf272afe:    done           |++++++++++++++++++++++++++++++++++++++|

elapsed: 78.1s                                                                    total:  29.8 M (391.0 KiB/s)                                    

root@k8s-master01:~#

  Apply the Calico manifest on the Kubernetes master

root@k8s-master01:~# kubectl apply -f calico.yaml

poddisruptionbudget.policy/calico-kube-controllers created

serviceaccount/calico-kube-controllers created

serviceaccount/calico-node created

configmap/calico-config created

customresourcedefinition.apiextensions.k8s.io/bgpconfigurations.crd.projectcalico.org created

customresourcedefinition.apiextensions.k8s.io/bgppeers.crd.projectcalico.org created

customresourcedefinition.apiextensions.k8s.io/blockaffinities.crd.projectcalico.org created

customresourcedefinition.apiextensions.k8s.io/caliconodestatuses.crd.projectcalico.org created

customresourcedefinition.apiextensions.k8s.io/clusterinformations.crd.projectcalico.org created

customresourcedefinition.apiextensions.k8s.io/felixconfigurations.crd.projectcalico.org created

customresourcedefinition.apiextensions.k8s.io/globalnetworkpolicies.crd.projectcalico.org created

customresourcedefinition.apiextensions.k8s.io/globalnetworksets.crd.projectcalico.org created

customresourcedefinition.apiextensions.k8s.io/hostendpoints.crd.projectcalico.org created

customresourcedefinition.apiextensions.k8s.io/ipamblocks.crd.projectcalico.org created

customresourcedefinition.apiextensions.k8s.io/ipamconfigs.crd.projectcalico.org created

customresourcedefinition.apiextensions.k8s.io/ipamhandles.crd.projectcalico.org created

customresourcedefinition.apiextensions.k8s.io/ippools.crd.projectcalico.org created

customresourcedefinition.apiextensions.k8s.io/ipreservations.crd.projectcalico.org created

customresourcedefinition.apiextensions.k8s.io/kubecontrollersconfigurations.crd.projectcalico.org created

customresourcedefinition.apiextensions.k8s.io/networkpolicies.crd.projectcalico.org created

customresourcedefinition.apiextensions.k8s.io/networksets.crd.projectcalico.org created

clusterrole.rbac.authorization.k8s.io/calico-kube-controllers created

clusterrole.rbac.authorization.k8s.io/calico-node created

clusterrolebinding.rbac.authorization.k8s.io/calico-kube-controllers created

clusterrolebinding.rbac.authorization.k8s.io/calico-node created

daemonset.apps/calico-node created

deployment.apps/calico-kube-controllers created

root@k8s-master01:~#

  Verify that the pods are Running and that every node is Ready

root@k8s-master01:~# kubectl get nodes

NAME                   STATUS   ROLES           AGE   VERSION

k8s-master01.ik8s.cc   Ready    control-plane   44m   v1.26.3

k8s-node01.ik8s.cc     Ready    <none>          40m   v1.26.3

k8s-node02.ik8s.cc     Ready    <none>          40m   v1.26.3

root@k8s-master01:~# kubectl get pods -A

NAMESPACE     NAME                                           READY   STATUS    RESTARTS   AGE

kube-system   calico-kube-controllers-57b57c56f-nmr7d        1/1     Running   0          39s

kube-system   calico-node-42rcw                              1/1     Running   0          39s

kube-system   calico-node-lwn5w                              1/1     Running   0          39s

kube-system   calico-node-zcnzb                              1/1     Running   0          39s

kube-system   coredns-5bbd96d687-822lh                       1/1     Running   0          44m

kube-system   coredns-5bbd96d687-mxvth                       1/1     Running   0          44m

kube-system   etcd-k8s-master01.ik8s.cc                      1/1     Running   0          44m

kube-system   kube-apiserver-k8s-master01.ik8s.cc            1/1     Running   0          44m

kube-system   kube-controller-manager-k8s-master01.ik8s.cc   1/1     Running   0          44m

kube-system   kube-proxy-67kjq                               1/1     Running   0          41m

kube-system   kube-proxy-bt79n                               1/1     Running   0          44m

kube-system   kube-proxy-l2zz8                               1/1     Running   0          41m

kube-system   kube-scheduler-k8s-master01.ik8s.cc            1/1     Running   0          44m

root@k8s-master01:~#

  Note: with every node Ready and every pod Running, the network plugin is deployed. One caveat: if calico-kube-controllers is scheduled onto a non-master node, copy the .kube/config file from the master's home directory to ~/.kube/config on that node; calico-kube-controllers needs to connect to the cluster during initialization, and without that file authentication fails and its initialization does not complete.

  Copy the .kube directory from the master's home directory to the worker nodes so that calico-kube-controllers can still initialize if it is scheduled onto a worker

root@k8s-node02:~# scp 192.168.0.71:/root/.kube/config ./.kube/

config                                                                                                                                   100% 5636     5.2MB/s   00:00   

root@k8s-node02:~#

  Note: do the same on the other worker node (k8s-node01).

  6. Deploy the official Dashboard

  Download the official Dashboard deployment manifest

root@k8s-master01:~# wget https://raw.githubusercontent.com/kubernetes/dashboard/v2.7.0/aio/deploy/recommended.yaml

  List the images the manifest needs

root@k8s-master01:~# mv recommended.yaml dashboard-v2.7.0.yaml

root@k8s-master01:~# cat dashboard-v2.7.0.yaml|grep image:

          image: kubernetesui/dashboard:v2.7.0

          image: kubernetesui/metrics-scraper:v1.0.8

root@k8s-master01:~#

  Pull the required images ahead of time

root@k8s-node01:~# nerdctl pull kubernetesui/dashboard:v2.7.0

WARN[0000] skipping verifying HTTPS certs for "docker.io"

docker.io/kubernetesui/dashboard:v2.7.0:                                          resolved       |++++++++++++++++++++++++++++++++++++++|

index-sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93:    done           |++++++++++++++++++++++++++++++++++++++|

manifest-sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029: done           |++++++++++++++++++++++++++++++++++++++|

config-sha256:07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558:   done           |++++++++++++++++++++++++++++++++++++++|

layer-sha256:8e052fd7e2d0aec4ef51e4505d006158414775ad5f0ea3e479ac0ba92f90dfff:    done           |++++++++++++++++++++++++++++++++++++++|

layer-sha256:ee3247c7e545df975ba3826979c7a8d73f1373cbb3ac47def3b734631cef2965:    done           |++++++++++++++++++++++++++++++++++++++|

elapsed: 61.8s                                                                    total:  72.3 M (1.2 MiB/s)                                      

root@k8s-node01:~# nerdctl pull kubernetesui/metrics-scraper:v1.0.8

WARN[0000] skipping verifying HTTPS certs for "docker.io"

docker.io/kubernetesui/metrics-scraper:v1.0.8:                                    resolved       |++++++++++++++++++++++++++++++++++++++|

index-sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c:    done           |++++++++++++++++++++++++++++++++++++++|

manifest-sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a: done           |++++++++++++++++++++++++++++++++++++++|

config-sha256:115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7:   done           |++++++++++++++++++++++++++++++++++++++|

layer-sha256:5866d2c04d960790300cbd8b18d67be6b930870d044dd75849c8c96191fe7580:    done           |++++++++++++++++++++++++++++++++++++++|

layer-sha256:978be80e3ee3098e11be2b18322822513d692988440ec1e74620e8539b07704d:    done           |++++++++++++++++++++++++++++++++++++++|

elapsed: 52.1s                                                                    total:  18.8 M (370.1 KiB/s)                                    

root@k8s-node01:~#

  Apply the manifest

root@k8s-master01:~# kubectl apply -f dashboard-v2.7.0.yaml

namespace/kubernetes-dashboard created

serviceaccount/kubernetes-dashboard created

service/kubernetes-dashboard created

secret/kubernetes-dashboard-certs created

secret/kubernetes-dashboard-csrf created

secret/kubernetes-dashboard-key-holder created

configmap/kubernetes-dashboard-settings created

role.rbac.authorization.k8s.io/kubernetes-dashboard created

clusterrole.rbac.authorization.k8s.io/kubernetes-dashboard created

rolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created

clusterrolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created

deployment.apps/kubernetes-dashboard created

service/dashboard-metrics-scraper created

deployment.apps/dashboard-metrics-scraper created

root@k8s-master01:~#

  Verify that the Dashboard pods are Running

  Note: both pods being in the Running state means the Dashboard deployed successfully.
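
  The check itself is a one-liner:

kubectl get pods -n kubernetes-dashboard -o wide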

  Create the user and the secret

root@k8s-master01:~# cat admin-user.yaml

apiVersion: v1

kind: ServiceAccount

metadata:

  name: admin-user

  namespace: kubernetes-dashboard

---

apiVersion: rbac.authorization.k8s.io/v1

kind: ClusterRoleBinding

metadata:

  name: admin-user

roleRef:

  apiGroup: rbac.authorization.k8s.io

  kind: ClusterRole

  name: cluster-admin

subjects:

- kind: ServiceAccount

  name: admin-user

  namespace: kubernetes-dashboard

root@k8s-master01:~# cat admin-secret.yaml

apiVersion: v1

kind: Secret

type: kubernetes.io/service-account-token

metadata:

  name: dashboard-admin-user

  namespace: kubernetes-dashboard

  annotations:

    kubernetes.io/service-account.name: "admin-user"

root@k8s-master01:~#

  Apply the manifests above

root@k8s-master01:~# kubectl apply -f admin-user.yaml

serviceaccount/admin-user created

clusterrolebinding.rbac.authorization.k8s.io/admin-user created

root@k8s-master01:~# kubectl apply -f admin-secret.yaml         

secret/dashboard-admin-user created

root@k8s-master01:~#

  Verify: retrieve the admin-user token

  Note: being able to read the user and its token from the cluster means the user and the secret were created successfully.
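
  The token can be read back from the secret created above, for example:

kubectl -n kubernetes-dashboard get secret dashboard-admin-user -o jsonpath='{.data.token}' | base64 -d && echo
# or simply:
kubectl -n kubernetes-dashboard describe secret dashboard-admin-user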

  Check the Dashboard service

  Note: the Dashboard Service maps host port 30000 to the pod's port 443 through a NodePort, which means the Dashboard can be reached on port 30000 of any cluster node.
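
  The stock recommended.yaml exposes the Dashboard as a ClusterIP Service, so to reach it on port 30000 of every node the kubernetes-dashboard Service has to be changed to a NodePort; a minimal sketch of the relevant section (30000 is simply this environment's choice of port):

kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  type: NodePort
  ports:
    - port: 443
      targetPort: 8443
      nodePort: 30000
  selector:
    k8s-app: kubernetes-dashboard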

  Verify: browse to port 30000 on any cluster node and confirm that the Dashboard loads

  Log in to the Dashboard with the token

  Note: logging in to the Dashboard with the token and seeing the cluster information confirms that the Dashboard deployment is working, and with it the whole containerd- and kubeadm-based Kubernetes cluster.
