One-Click Kubernetes Cluster Deployment

Prerequisites

Disable the firewall

systemctl disable firewalld && systemctl stop firewalld

Disable SELinux

sed -i 's#SELINUX=enforcing#SELINUX=disabled#g' /etc/selinux/config && grep 'SELINUX=disabled' /etc/selinux/config
setenforce 0 
getenforce

Disable swap

# Turn swap off for the current boot
swapoff -a
# Disable swap permanently
sed -ri 's/.*swap.*/#&/' /etc/fstab
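To confirm a node really ends up in the required state, the `free -m` output can be checked mechanically. Below is a small sketch; the sample output and the `swap_is_off` helper are illustrative, not part of bcs.sh:

```shell
#!/usr/bin/env bash
# swap_is_off: succeeds only when the `free -m` output passed in
# has a Swap row whose total column is 0.
swap_is_off() {
  echo "$1" | awk '/^Swap:/ { found = 1; if ($2 != 0) bad = 1 }
                   END { exit (found && !bad) ? 0 : 1 }'
}

# Illustrative `free -m` output from a node with swap disabled:
sample_output="              total        used        free
Mem:          15885        1200       14685
Swap:             0           0           0"

if swap_is_off "$sample_output"; then
  echo "swap disabled"
fi
```

On a live node, replace `sample_output` with `"$(free -m)"`.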

Deploy the Kubernetes cluster

Minimum machine configuration

| # | Role | Hostname | IP | Spec (CPU/RAM/Disk) |
|---|------|----------|----|---------------------|
| 1 | Master | k8s-api.bcs.local | 192.168.46.128 | 8C16G, 100G |
| 2 | Node1 | node-192-168-46-129 | 192.168.46.129 | 4C8G, 100G |
| 3 | Node2 | node-192-168-46-130 | 192.168.46.130 | 4C8G, 100G |
| 4 | Node3 | node-192-168-46-131 | 192.168.46.131 | 4C8G, 100G |
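The hostname column matters: the bcs.sh script below pins the control-plane endpoint k8s-api.bcs.local to the first master's IP via /etc/hosts on every node. With the inventory above, the entry it writes looks like this (sketched against a scratch file so it can be run anywhere; on a real node the target is /etc/hosts):

```shell
#!/usr/bin/env bash
# Append the kube-apiserver alias for the inventory above.
hosts_file="$(mktemp)"   # use /etc/hosts on a real node
cat >> "$hosts_file" << 'EOF'
# bcs config begin for kube-apiserver
192.168.46.128 k8s-api.bcs.local
# bcs config end for kube-apiserver
EOF
grep 'k8s-api.bcs.local' "$hosts_file"
```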

Software requirements

| Item | Requirement | Check |
|------|-------------|-------|
| OS | CentOS 7.9, 64-bit | cat /etc/centos-release |
| Kernel | 3.10.0 or later | uname -r |
| Swap | Disabled, to keep I/O spikes from affecting the kubelet process | `free -m` shows 0 on the Swap row |
| Firewall | Disabled | `iptables -vnL` shows no extra rules |
| SELinux | Disabled (required by Kubernetes) | `getenforce` prints Disabled |
| Time zone | Identical on all servers; Beijing time recommended | set with `timedatectl set-timezone Asia/Shanghai` |
| Time sync | etcd elections require clock skew between nodes below 1s | sync time with chronyd |
| Docker version | 19.03 or later | docker version |
| Kubernetes version | 1.18 or 1.20 only; other versions are untested. Users report versions above 1.22 are incompatible, and BCS deployment fails on 1.17 | kubectl version |
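Several rows in the table are plain version comparisons, which are easy to automate in a pre-flight script. A sketch of the kernel check; `version_ge` is a helper defined here, not part of bcs.sh:

```shell
#!/usr/bin/env bash
# version_ge A B: succeeds when dotted version A >= B (relies on GNU sort -V).
version_ge() {
  [ "$(printf '%s\n%s\n' "$2" "$1" | sort -V | head -n1)" = "$2" ]
}

kernel_min="3.10.0"
kernel_now="$(uname -r | cut -d- -f1)"   # e.g. "5.4.119" from "5.4.119-19-0009"
if version_ge "$kernel_now" "$kernel_min"; then
  echo "kernel $kernel_now OK (>= $kernel_min)"
fi
```

The same helper works for the Docker and kubectl minimums once their versions are extracted from `docker version` / `kubectl version` output.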

Deploy the initial master

The one-click deployment script bcs.sh is shown below:

#!/usr/bin/env bash
# Usage: Install BCS
BCS_DIR="$HOME/.bcs"
BK_HOME=${BK_HOME:-/data/bcs}
bcs_override=${bcs_override:-false}
bcs_sysctl=${bcs_sysctl:-1}
# yum
MIRROR_IP=${MIRROR_IP:-null}
# A local yum repo can be configured, e.g.: MIRROR_URL=${MIRROR_URL:-http://192.168.10.41/local/}
# The Tencent yum mirror is configured below
MIRROR_URL=${MIRROR_URL:-https://mirrors.tencent.com}
# helm  
BKREPO_URL=${BKREPO_URL:-null}
# docker  
DOCKER_LIB=${DOCKER_LIB:-${BK_HOME}/lib/docker}
DOCKER_VERSION=${DOCKER_VERSION:-19.03.9}
DOCKER_LIVE_RESTORE=${DOCKER_LIVE_RESTORE:-false}
# A local registry mirror can be configured, e.g.: REPO_MIRRORS=${REPO_MIRRORS:-[\"http://192.168.10.41/local/\"]}
REPO_MIRRORS=${REPO_MIRRORS:-[\"https://mirror.ccs.tencentyun.com\"]}
DOCKER_BRIDGE=${DOCKER_BRIDGE:-null}
# A local Docker registry can be configured, e.g.:
# BK_PUBLIC_REPO=${BK_PUBLIC_REPO:-192.168.10.42:5000}
# BK_RELEASE_REPO=${BK_RELEASE_REPO:-192.168.10.42:5000}
BK_PUBLIC_REPO=${BK_PUBLIC_REPO:-hub.bktencent.com}
BK_RELEASE_REPO=${BK_RELEASE_REPO:-hub.bktencent.com/blueking}
# k8s 
KUBELET_LIB=${KUBELET_LIB:-${BK_HOME}/lib/kubelet}
BCS_K8S_CTRL_IP=${BCS_K8S_CTRL_IP:-$LAN_IP}
K8S_VER=${K8S_VER:-1.20.11}
K8S_SVC_CIDR=${K8S_SVC_CIDR:-10.96.0.0/12}
K8S_POD_CIDR=${K8S_POD_CIDR:-10.244.0.0/16}
K8S_EXTRA_ARGS=${K8S_EXTRA_ARGS:-allowed-unsafe-sysctls: 'net.ipv4.tcp_tw_reuse'}
ETCD_LIB=${ETCD_LIB:-${BK_HOME}/lib/etcd}
BCS_CP_WORKER=${BCS_CP_WORKER:-0}
K8S_CNI=${K8S_CNI:-flannel}
join_cmd_b64=${join_cmd_b64:-null}
cluster_env=${cluster_env:-null}
master_join_cmd_b64=${master_join_cmd_b64:-null}

# If a local yum repo is configured, write its address as below; otherwise skip this block
# rm -f /etc/yum.repos.d/*.repo
# cat >/etc/yum.repos.d/CentOS-Base.repo<<EOF
# [centos]
# name=CentOS-releasever - Base
# baseurl=http://192.168.10.41/local/
# gpgcheck=0
# enabled=1
# EOF
# yum clean all
# yum makecache
# yum repolist

# Safe mode
set -euo pipefail

# Reset PATH
PATH=/usr/local/bin:/usr/local/sbin:/usr/sbin:/usr/bin:/sbin:/bin
export PATH

# Common script-framework variables
#SELF_DIR=$(dirname "$(readlink -f "$0")")
#PROGRAM=$(basename "$0")
VERSION=1.0
EXITCODE=0
OP_TYPE=
LAN_IP=

# Global defaults
PROJECTS=( bcsenv op helm k8smaster k8snode )
PROJECT=
ON_CLOUD="bare-metal"

# error exit handler
err_trap_handler () {
  MYSELF="$0"
  LASTLINE="$1"
  LASTERR="$2"
  echo "${MYSELF}: line ${LASTLINE} with exit code ${LASTERR}" >&2
}
trap 'err_trap_handler ${LINENO} $?' ERR

usage () {
  cat <<EOF
Usage: bcs.sh
  [ -h --help -?      show this help ]
  [ -i, --install     install a module (${PROJECTS[*]}) ]
  [ -c, --clean       clean up a module (${PROJECTS[*]}) ]
  [ -r, --render      render a module's config (${PROJECTS[*]}) ]
  [ -v, --version     (optional) show the script version ]
EOF
}

usage_and_exit () {
  usage
  exit "$1"
}

log () {
  echo "[INFO]: $*"
}

warning () {
  echo "[WARN]: $*" 1>&2
  EXITCODE=$((EXITCODE + 1))
}

version () {
  echo "bcs.sh version $VERSION"
}

highlight () {
  echo -e "\033[7m  $*  \033[0m"
}

error () {
  highlight "[ERROR]: $*" 1>&2
  usage_and_exit 1
}

ok_bcs () {
  cat <<EOF
$(log "LAN_IP: $LAN_IP"
highlight "Welcome to BCS on $ON_CLOUD"
)
EOF
}

bye_bcs () {
  cat <<EOF
$(highlight "Finish")
EOF
}

_retry () {
  local n=1
  local max=2
  local delay=1
  while true; do
    if "$@"; then
      break
    elif (( n < max )); then
      ((n++))
      warning "Command failed. Attempt $n/$max:"
      sleep $delay
    else
      error "The command $* has failed after $n attempts."
    fi
  done
}
### Ops-related setup
install_op () {
  _install_common
  op_kubeadm
  op_kubectl
  op_minikube
  op_helm
  op_bkrepo "${BKREPO_URL}"
  log "Complete"
}

_install_common () {
  if ! rpm -q bash-completion &>/dev/null; then
    yum -y install bash-completion || error "Install bash-completion Failed"
  fi
}

op_kubeadm () {
  # detect kubeadm
  if command -v kubeadm &>/dev/null; then
    sed -ri '/bcs config begin for kubeadm/,/bcs config end for kubeadm/d' "$BCS_DIR/bcs.env"
    cat >> "$BCS_DIR/bcs.env" << 'EOF'
# bcs config begin for kubeadm
# kubeadm command completion
source <(kubeadm completion bash)
# bcs config end for kubeadm
EOF
  fi
}

op_kubectl () {
  if command -v kubectl &>/dev/null; then
    sed -ri '/bcs config begin for kubectl/,/bcs config end for kubectl/d' "$BCS_DIR/bcs.env"
    cat >> "$BCS_DIR/bcs.env" << 'EOF'
# bcs config begin for kubectl
# kubectl command completion
source <(kubectl completion bash)
# bcs config end for kubectl
EOF
  fi
}

op_minikube () {
  if command -v minikube &>/dev/null; then
    sed -ri '/bcs config begin for minikube/,/bcs config end for minikube/d' "$BCS_DIR/bcs.env"
    cat >> "$BCS_DIR/bcs.env" << 'EOF'
# bcs config begin for minikube
# minikube command completion
source <(minikube completion bash)
# bcs config end for minikube
EOF
  fi
}

op_helm () {
  if command -v helm &>/dev/null; then
    sed -ri '/bcs config begin for helm/,/bcs config end for helm/d' "$BCS_DIR/bcs.env"
    cat >> "$BCS_DIR/bcs.env" << 'EOF'
# bcs config begin for helm
# Helm command completion
source <(helm completion bash)
# Enable Helm's OCI support
export HELM_EXPERIMENTAL_OCI=1
# bcs config end for helm
EOF
  fi
}

op_bkrepo () {
  local BKREPO_URL="$1"
  if command -v helm &>/dev/null; then
    if [[ $BKREPO_URL == "null" ]]; then
      warning "BKREPO_URL is ${BKREPO_URL}, skipping"
      return 0
    fi
    highlight "Add bkrepo: ${BKREPO_URL}"
    # Comment out the next two lines when using a local repo; keep them for the Tencent repo
    helm repo add bk "${BKREPO_URL}"
    helm repo update
    log "bkrepo added"
  else
    warning "Add bkrepo: helm not found, skipping"
    return 0
  fi
}

clean_op () {
  helm repo remove bkrepo || warning "remove bkrepo failed"
  clean_bcsenv
}
### Environment / system initialization
install_bcsenv () {
  local bcs_override=true
  _on_cloud
  _add_sysctl
  _add_hosts
  cat -n "$BCS_DIR/bcs.env"
  _init_kubeadmconfig
  log "Complete"
}

_init_kubeadmconfig () {
  local join_cmd
  local node_name
  local node_type
  # Parameter checks
  [[ -n ${BCS_K8S_CTRL_IP} ]] || error "Kubernetes control plane IP not specified"
  if [[ ${join_cmd_b64} != "null" ]]; then
    join_cmd="$(echo -n "${join_cmd_b64}" | base64 -d)"
    echo -n "${join_cmd}" | grep -q "kubeadm join" || error "invalid node join command"
    node_name="node-$(echo "$LAN_IP" | tr '.' '-')"
    node_type="JoinConfiguration"
  elif [[ ${master_join_cmd_b64} != "null" ]]; then
    join_cmd="$(echo -n "${master_join_cmd_b64}" | base64 -d)"
    echo -n "${join_cmd}" | grep -q "kubeadm join" || error "invalid master join command"
    node_name="master-$(echo "$LAN_IP" | tr '.' '-')"
    node_type="JoinConfiguration"
  else
    node_name="master-$(echo "$LAN_IP" | tr '.' '-')"
    node_type="InitConfiguration"
  fi
  cat > "$BCS_DIR/kubeadm-config" << EOF
apiVersion: kubeadm.k8s.io/$([[ $K8S_VER =~ ^1.12 ]] && { echo "v1alpha3"; exit; }
[[ $K8S_VER =~ ^1.1[3|4] ]] && { echo "v1beta1"; exit; }
[[ $K8S_VER =~ ^1.(1[5-9]|2[0-2]) ]] && { echo "v1beta2"; exit; }
)
apiServer:
  extraArgs:
    authorization-mode: Node,RBAC
  timeoutForControlPlane: 4m0s
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controlPlaneEndpoint: k8s-api.bcs.local:6443
controllerManager: {}
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: ${ETCD_LIB}
# For a local image registry, change this to your own registry path, e.g.: imageRepository: ${BK_PUBLIC_REPO}
imageRepository: ${BK_PUBLIC_REPO}/k8s.gcr.io
kind: ClusterConfiguration
kubernetesVersion: v${K8S_VER}
networking:
  dnsDomain: cluster.local
  podSubnet: ${K8S_POD_CIDR}
  serviceSubnet: ${K8S_SVC_CIDR}
scheduler: {}
---
apiVersion: kubeadm.k8s.io/$([[ $K8S_VER =~ ^1.12 ]] && { echo "v1alpha3"; exit; }
[[ $K8S_VER =~ ^1.1[3|4] ]] && { echo "v1beta1"; exit; }
[[ $K8S_VER =~ ^1.(1[5-9]|2[0-2]) ]] && { echo "v1beta2"; exit; }
)
kind: $node_type
nodeRegistration:
  name: $node_name
  kubeletExtraArgs:
    root-dir: ${KUBELET_LIB}
$(if [[ -n ${K8S_EXTRA_ARGS} ]]; then
cat << EOFF
    ${K8S_EXTRA_ARGS}
EOFF
fi
)
$(if [[ $K8S_VER =~ ^1.12 ]]; then
cat << EOFF
    pod-infra-container-image: ${BK_PUBLIC_REPO}/k8s.gcr.io/pause:3.1
EOFF
fi
if [[ $K8S_VER =~ ^1.12 ]] && [[ $node_type == "JoinConfiguration" ]]; then
cat << EOFF
#discoveryToken: $(echo ${join_cmd} | grep -Po '(?<=discovery-token-ca-cert-hash )sha256:[a-z0-9]{64}' )
discoveryTokenAPIServers:
- k8s-api.bcs.local:6443
discoveryTokenUnsafeSkipCAVerification: true
tlsBootstrapToken: $(echo ${join_cmd} | grep -Po '(?<=token )[a-z0-9.]{23}' )
token: $(echo ${join_cmd} | grep -Po '(?<=token )[a-z0-9.]{23}' )
EOFF
elif [[ $node_type == "JoinConfiguration" ]]; then
cat << EOFF
discovery:
  bootstrapToken:
    apiServerEndpoint: k8s-api.bcs.local:6443
    caCertHashes:
    - $(echo ${join_cmd} | grep -Po '(?<=discovery-token-ca-cert-hash )sha256:[a-z0-9]{64}' )
    token: $(echo ${join_cmd} | grep -Po '(?<=token )[a-z0-9.]{23}' )
EOFF
if [[ $node_name =~ ^master ]]; then
cat << EOFF
controlPlane:
  certificateKey: $(echo ${join_cmd} | grep -Po '(?<=certificate-key )[a-z0-9]{64}' )
EOFF
fi
fi
)
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: ipvs
$(if ! [[ $BCS_K8S_CTRL_IP =~ $LAN_IP ]]; then
cat << EOFF
ipvs:
  excludeCIDRs:
  - "$BCS_K8S_CTRL_IP/32"
EOFF
fi
)
EOF
  highlight "$node_name: init bcsenv"
}

_on_baremetal () {
  log "NOT on cloud"
  [[ -n $LAN_IP ]] || LAN_IP=$(ip -4 -o route get 10/8 | sed -n 's/.*src \([0-9.]\+\).*/\1/p')
  _init_bcsenv
}

_on_cloud () {
  install -dv "${BCS_DIR}" || warning "create ${BCS_DIR} dir failed"
  touch "${BCS_DIR}/bcs.env"
  if [[ $bcs_override != "true" ]]; then
    #set -a
    # shellcheck disable=SC1091
    source "$BCS_DIR/bcs.env"
    #set +a
    [[ -z $LAN_IP ]] || return 0
  fi
  QCLOUD_META_API="http://169.254.0.23/latest/meta-data"
  AWS_META_API="http://169.254.169.254/latest/meta-data"
  local META_API
  if curl -m 2 -qIfs "${QCLOUD_META_API}" >/dev/null; then
    ON_CLOUD="qcloud"
    META_API="${QCLOUD_META_API}"
  elif curl -m 2 -Ifs "${AWS_META_API}" >/dev/null; then
    ON_CLOUD="aws"
    META_API="${AWS_META_API}"
  else
    _on_baremetal
    return 0
  fi
  LAN_IP="$( curl -sSf ${META_API}/local-ipv4 )"
  [[ -n $LAN_IP ]] || LAN_IP=$(ip -4 -o route get 10/8 | sed -n 's/.*src \([0-9.]\+\).*/\1/p')
  case "$ON_CLOUD" in
    qcloud)
      # For a local yum repo, use your own address, e.g. mirror_url="http://192.168.10.41/local/"
      mirror_url="http://mirrors.tencentyun.com"
      ;;
    aws)
      # For a local yum repo, use your own address, e.g. mirror_url="http://192.168.10.41/local/"
      mirror_url="https://mirrors.tencent.com"
      ;;
  esac
  _init_bcsenv
}

_init_bcsenv () {
  highlight "Add envfile"
  # shellcheck disable=SC1090
  [[ ${cluster_env} == "null" ]] || source <( echo "${cluster_env}" | base64 -d )
  [[ -n ${MIRROR_URL} ]] || MIRROR_URL=${mirror_url}
  # local LAN_IP="$1"
  # local MIRROR_URL="$2"
  cat > "$BCS_DIR/bcs.env" << EOF
# bcs config begin for $ON_CLOUD
ON_CLOUD="${ON_CLOUD}"
BCS_DIR="${BCS_DIR}"
BK_HOME="${BK_HOME}"
bcs_sysctl="${bcs_sysctl}"
MIRROR_IP="${MIRROR_IP}"
MIRROR_URL="${MIRROR_URL}"
BKREPO_URL="${BKREPO_URL}"
DOCKER_LIB="${DOCKER_LIB}"
DOCKER_VERSION="${DOCKER_VERSION}"
DOCKER_LIVE_RESTORE="${DOCKER_LIVE_RESTORE}"
REPO_MIRRORS='${REPO_MIRRORS}'
DOCKER_BRIDGE="${DOCKER_BRIDGE}"
BK_PUBLIC_REPO="${BK_PUBLIC_REPO}"
BK_RELEASE_REPO="${BK_RELEASE_REPO}"
KUBELET_LIB="${KUBELET_LIB}"
K8S_VER="${K8S_VER}"
K8S_SVC_CIDR="${K8S_SVC_CIDR}"
K8S_POD_CIDR="${K8S_POD_CIDR}"
K8S_EXTRA_ARGS="${K8S_EXTRA_ARGS}"
ETCD_LIB="${ETCD_LIB}"
LAN_IP="${LAN_IP}"
BCS_K8S_CTRL_IP="${BCS_K8S_CTRL_IP:-$LAN_IP}"
# bcs config end for $ON_CLOUD
EOF
  sed -ri "/bcs config begin for $ON_CLOUD/,/bcs config end for $ON_CLOUD/d" "$HOME/.bashrc"
  cat >> "$HOME/.bashrc" << EOF
# bcs config begin for $ON_CLOUD
source "${BCS_DIR}/bcs.env"
# bcs config end for $ON_CLOUD
EOF
# shellcheck disable=SC1091
source "${BCS_DIR}/bcs.env"
}

_add_sysctl () {
  # shellcheck disable=SC1091
  source /etc/os-release
  if [[ $VERSION_ID != "2.2" ]]; then
    echo br_netfilter ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack | xargs -n1 modprobe
  fi
  if [[ -f /etc/tlinux-release ]] && [[ $K8S_CNI == "ws_flannel" ]]; then
    if lsmod | grep -q vxlan; then
      rmmod vxlan
    fi
    if lsmod | grep -q vxlan; then
      error "failed to unload the vxlan module"
    fi
    modprobe vxlan udp_port=4789
    log "Windows flannel VXLAN using $(cat /sys/module/vxlan/parameters/udp_port)"
  fi
  [[ ${bcs_sysctl} == "1" ]] || return 0
  highlight "Add sysctl"
  TOTAL_MEM=$(free -b | awk 'NR==2{print $2}')
  TOTAL_MEM=${TOTAL_MEM:-$(( 16 * 1024 * 1024 * 1024 ))}
  PAGE_SIZE=$(getconf PAGE_SIZE)
  PAGE_SIZE=${PAGE_SIZE:-4096}
  THREAD_SIZE=$(( PAGE_SIZE << 2 ))
  sed -ri.bcs.bak '/bcs config begin/,/bcs config end/d' /etc/sysctl.conf
  cat >> "/etc/sysctl.conf" << EOF
# bcs config begin
# Max listen backlog per port (global). The default of 128 is too small; 32768 matches peer vendors
net.core.somaxconn=32768
# Reuse TIME-WAIT ports when there are many short-lived connections
net.ipv4.tcp_tw_reuse=1
# TCP SYN (half-open) queue length. Too small a value causes client connection requests to be rejected under high concurrency
net.ipv4.tcp_max_syn_backlog=8096
# Per-user limit on inotify instances. Raise it to avoid "Too many open files" errors
fs.inotify.max_user_instances=8192
# Total inotify watch limit. Raise it to avoid "Too many open files" errors
fs.inotify.max_user_watches=524288
# Required when using BPF
net.core.bpf_jit_enable=1
net.core.bpf_jit_harden=1
net.core.bpf_jit_kallsyms=1
# Scales the per-CPU tx budget relative to dev_weight (dev_weight * dev_weight_tx_bias); tunes rx/tx asymmetry between the network stack and the CPU
net.core.dev_weight_tx_bias=1
# Max socket receive buffer size
net.core.rmem_max=16777216
# RPS steers rx softirq processing onto suitable CPU cores to improve overall network performance; this sets the RPS flow table size
net.core.rps_sock_flow_entries=8192
# Max socket send buffer size
net.core.wmem_max=16777216
# Avoid "neighbor table overflow" errors (seen in a real customer case: more than 1024 nodes plus an app that talks to every node)
net.ipv4.neigh.default.gc_thresh1=2048
# Same as above
net.ipv4.neigh.default.gc_thresh2=8192
# Same as above
net.ipv4.neigh.default.gc_thresh3=16384
# Orphan sockets have been closed by the app but not yet released by the TCP stack (excluding TIME_WAIT and CLOSE_WAIT). Raise this to avoid 'Out of socket memory' errors under load; 32768 matches peer vendors
net.ipv4.tcp_max_orphans=32768
# Proxies such as nginx create many TIME_WAIT sockets. Raise this to avoid "TCP: time wait bucket table overflow" errors
net.ipv4.tcp_max_tw_buckets=16384
# TCP socket receive buffer sizes; too small reduces TCP throughput
net.ipv4.tcp_rmem=4096 12582912 16777216
# TCP socket send buffer sizes; too small reduces TCP throughput
net.ipv4.tcp_wmem=4096 12582912 16777216
# Limit on virtual memory areas per process address space
vm.max_map_count=262144
# Must be enabled to support k8s Services
net.ipv4.ip_forward=1
# On Ubuntu this defaults to "/usr/share/apport/apport %p %s %c %P", which prevents core files from being generated inside containers
kernel.core_pattern=core
# Allow the kernel to panic on soft lockups or infinite loops; the default is 0
kernel.softlockup_panic=0
# Let iptables act on bridged traffic
net.bridge.bridge-nf-call-ip6tables=1
net.bridge.bridge-nf-call-iptables=1
# System-wide PID limit
kernel.pid_max=$(( 4 * 1024 * 1024 ))
# System-wide thread limit, derived from memory size: TOTAL_MEM is total memory in bytes, THREAD_SIZE defaults to 16 KB
kernel.threads-max=$(( TOTAL_MEM / (8 * THREAD_SIZE) ))
# System-wide fd (including socket) limit, derived from memory size (TOTAL_MEM in bytes). Raise it to avoid "Too many open files" errors
fs.file-max=$(( TOTAL_MEM / 10240 ))
# bcs config end
EOF
  sysctl --system
  # ulimit
  cat > /etc/security/limits.d/99-bcs.conf << EOF
# bcs config begin
*   soft  nproc    1028546
*   hard  nproc    1028546
*   soft  nofile    204800
*   hard  nofile    204800
# bcs config end
EOF
}

_add_hosts () {
  [[ ${MIRROR_IP} != "null" ]] || return 0
  highlight "Add hosts"
  sed -ri.bcs.bak '/bcs config begin for bcs/,/bcs config end for bcs/d' /etc/hosts
  cat >> "/etc/hosts" << EOF
# bcs config begin for bcs
$( if [[ ${ON_CLOUD} == qcloud ]] && [[ -n ${MIRROR_IP} ]]; then
  echo "${MIRROR_IP} mirrors.tencentyun.com"
fi
)
# bcs config end for bcs
EOF
}

### Container runtime: Docker
install_docker () {
  # If using a local yum repo, replace this block with your local setup:
  # yum install -y -q yum-utils
  ## List the Docker versions available in the repo, then pin one, e.g.:
  # yum -y install docker-ce-19.03.9-3.el7 docker-ce-cli-19.03.9-3.el7 containerd.io
  local yum_repo
  yum_repo="${MIRROR_URL}/docker-ce/linux/centos/docker-ce.repo"
  if docker info &>/dev/null && [[ -d ${DOCKER_LIB} ]]; then
    warning "Already installed, skipping"
    return 0
  fi
  if ! curl -Ifs "$yum_repo" > /dev/null; then
    error "Unable to curl repository file $yum_repo, is it valid?"
  fi
  curl -fs "$yum_repo" | sed "s#https://download.docker.com#${MIRROR_URL}/docker-ce#g" | tee "$BCS_DIR/docker-ce.repo"
  [[ ! -f /etc/tlinux-release ]] || sed -i "s/\$releasever/7/g" "$BCS_DIR/docker-ce.repo"
  yum install -y -q yum-utils
  yum-config-manager --add-repo "$BCS_DIR/docker-ce.repo"
  yum makecache fast
  # List the Docker versions available in the repo and pin the Docker version
  pkg_pattern="$(echo "${DOCKER_VERSION}" | sed "s/-ce-/\\\\.ce.*/g" | sed "s/-/.*/g").*el"
  pkg_version=$(yum list --showduplicates 'docker-ce' | grep "$pkg_pattern" | tail -1 | awk '{print $2}' | cut -d':' -f 2)
  [[ -n $pkg_version ]] || error "$DOCKER_VERSION not found amongst yum list results"
  cli_pkg_version=$(yum list --showduplicates 'docker-ce-cli' | grep "$pkg_pattern" | tail -1 | awk '{print $2}' | cut -d':' -f 2)
  # Install
  yum -y install docker-ce-cli-"$cli_pkg_version" docker-ce-"$pkg_version" containerd.io
  # Settings
  render_docker
  # Enable
  systemctl enable docker
  systemctl restart docker
  # Testing
  docker info
  if ! docker --version; then
    error "Did Docker get installed?"
  fi
  if ! docker run --rm "$BK_PUBLIC_REPO"/library/hello-world:latest; then
    error "Could not get docker to run the hello world container"
  fi
}

render_docker () {
  # To-Do: tune the Docker configuration
  # dockerd | Docker Documentation
  # https://docs.docker.com/engine/reference/commandline/dockerd/
  # Docker tuning | Rancher docs
  # https://docs.rancher.cn/docs/rancher2/best-practices/2.0-2.4/optimize/docker/_index
  ## Create the data directory
  install -dv "${DOCKER_LIB}"
  ## Create the config directories
  install -dv /etc/docker/
  install -dv /etc/systemd/system/docker.service.d/
  if [[ -s /etc/docker/daemon.json ]] && [[ ! -f /etc/docker/daemon.json.bcs.bak ]]; then
    warning "/etc/docker/daemon.json exists, backing it up..."
    cp -av /etc/docker/daemon.json{,.bcs.bak} || error "failed to back up the original config file"
  fi
  log "Writing the Docker config file..."
  # For a local registry add your own entry, e.g.: "insecure-registries": ["192.168.10.42:5000"]
  cat > /etc/docker/daemon.json << EOF
{
  "data-root": "${DOCKER_LIB}",
  "exec-opts": ["native.cgroupdriver=systemd"],
  "registry-mirrors": ${REPO_MIRRORS},
  "max-concurrent-downloads": 10,
  "live-restore": ${DOCKER_LIVE_RESTORE},
  "log-level": "info",
  "log-opts": {
    "max-size": "100m",
    "max-file": "5"
  },
  "storage-driver": "overlay2",
  "storage-opts": ["overlay2.override_kernel_check=true"]
}
EOF
  ## Create the Docker bridge manually to avoid route conflicts
  if [[ ${DOCKER_BRIDGE} != "null" ]]; then
    ip link add name docker0 type bridge
    ip addr add dev docker0 "${DOCKER_BRIDGE}"
    sed -i "/\"data-root\":/i    \"bip\": \"${DOCKER_BRIDGE}\"," /etc/docker/daemon.json
  fi
  ## systemd service
  cat > /etc/systemd/system/docker.service.d/bcs-docker.conf << EOF
[Service]
ExecStartPost=/sbin/iptables -P FORWARD ACCEPT
EOF
  systemctl daemon-reload
  log "Complete"
}

clean_bcsenv () {
  if [[ -f "$BCS_DIR/bcs.env" ]]; then
    if grep -q "bcs config begin" "$BCS_DIR/bcs.env" "$HOME/.bashrc"; then
      sed -ri.bcs.bak "/bcs config begin/,/bcs config end/d" "$BCS_DIR/bcs.env" "$HOME/.bashrc"
    fi
  fi
  log "Complete"
}

### Kubernetes
install_k8stool () {
  local mirror_url
  master_iplist=${BCS_K8S_CTRL_IP:-$LAN_IP}
  read -r -a master_iplist <<< "${master_iplist//,/ }"
  if [[ -z ${master_iplist[0]} ]]; then
    error "BCS_K8S_CTRL_IP is null"
  fi
  highlight "Add kube-apiserver hosts"
  sed -ri.bcs.bak '/bcs config begin for kube-apiserver/,/bcs config end for kube-apiserver/d' /etc/hosts
  cat >> /etc/hosts << EOF
# bcs config begin for kube-apiserver
${master_iplist[0]} k8s-api.bcs.local
# bcs config end for kube-apiserver
EOF
  # Pre: add the repo
  mirror_url="${MIRROR_URL}/kubernetes"
  # If a local Docker registry is configured, comment out the k8s repo config below
  cat > "$BCS_DIR/kubernetes.repo" << EOF
[kubernetes]
name=Kubernetes
baseurl=${mirror_url}/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
EOF
  yum install -y -q yum-utils bash-completion
  # If a local Docker registry is configured, comment out the k8s repo config below
  yum-config-manager --add-repo "$BCS_DIR/kubernetes.repo"
  yum clean all
  yum makecache fast
  ## kubelet data directory
  install -dv "${KUBELET_LIB}"
  # cat > /etc/sysconfig/kubelet << EOF
# KUBELET_EXTRA_ARGS="--root-dir=${KUBELET_LIB}"
# EOF
  # The six lines below could instead be:
  # yum -y install \
  #   "kubectl-1.20.11-0" \
  #   "kubelet-1.20.11-0" \
  #   "kubeadm-1.20.11-0"
  pkg_pattern="${K8S_VER}"
  pkg_version=$(yum list --showduplicates 'kubeadm' | grep -w "$pkg_pattern" | tail -1 | awk '{print $2}' | cut -d':' -f 2)
  yum -y install \
    "kubectl-${pkg_version}" \
    "kubeadm-${pkg_version}" \
    "kubelet-${pkg_version}"
  # kubeadm config images pull --config="$BCS_DIR/kubeadm-config" -v 11
  # kubeadm config images pull --image-repository="${BK_PUBLIC_REPO}/k8s.gcr.io" \
  #     -v 11 --kubernetes-version "${K8S_VER}" || error "pull kubernetes images failed"
  highlight "kubectl $(kubectl version --client --short || error "Did kubectl get installed?" )"
  highlight "kubeadm Version: $(kubeadm version -o short || error "Did kubeadm get installed?" )"
}

install_helm () {
  command -v docker &>/dev/null || error "docker client is not found"
  if ! helm version --short 2>/dev/null | grep -qoE '^v3\.'; then
    docker run -v /usr/bin:/tmp --rm --entrypoint /bin/cp "${BK_PUBLIC_REPO}"/alpine/helm:3.7.2 -f /usr/bin/helm /tmp/ || error "pull helm image failed"
  fi
  highlight "helm Version: $(helm version --short)"
}

get_joincmd () {
  install_k8sctrl
}

install_k8sctrl () {
  local node_name
  local join_cmd
  local cert_key
  local master_join_cmd
  if ! kubectl cluster-info 2>/dev/null; then
    systemctl enable --now kubelet
    ## etcd data directory
    install -dv "${ETCD_LIB}"
    ln -sfv "${ETCD_LIB}" /var/lib/
    node_name="master-$(echo "$LAN_IP" | tr '.' '-')"
    highlight "Setup K8S Control Plane: $node_name"
    kubeadm init --config="$BCS_DIR/kubeadm-config" "$( [[ $K8S_VER =~ ^1.12 ]] && echo --ignore-preflight-errors=SystemVerification || echo --upload-certs)" || error "install k8s master failed"
    # kubeadm init --node-name "$node_name" --kubernetes-version "${K8S_VER}" \
    #             --control-plane-endpoint k8s-api.bcs.local \
    #             --image-repository="${BK_PUBLIC_REPO}/k8s.gcr.io" \
    #             --service-cidr="${K8S_SVC_CIDR}" --pod-network-cidr="${K8S_POD_CIDR}" --upload-certs || error "install k8s master failed"
    install -dv "$HOME/.kube"
    install -v -m 600 -o "$(id -u)" -g "$(id -g)" /etc/kubernetes/admin.conf "$HOME/.kube/config"
    # Create the flannel CNI
    if [[ -n ${K8S_CNI} ]]; then
      "install_${K8S_CNI}"
    else
      install_flannel
    fi
  fi
  install_op
  join_cmd="$(kubeadm token create --print-join-command)"
  if [[ $K8S_VER =~ ^1.12 ]]; then
    join_cmd="$join_cmd --ignore-preflight-errors=SystemVerification"
    kubectl set image deployment/coredns coredns="${BK_PUBLIC_REPO}/k8s.gcr.io/coredns:1.2.6" -n kube-system
    # kubectl get nodes -l kubernetes.io/os || kubectl label node -l node-role.kubernetes.io/master= kubernetes.io/os=linux
    highlight "Kubernetes control plane is up"
  else
    cert_key="$(kubeadm init phase upload-certs --upload-certs | grep -E '[a-z0-9]{64}')"
    [[ -n $cert_key ]] || error "not found certificate key"
    master_join_cmd="$join_cmd --control-plane --certificate-key $cert_key"
    [[ "$BCS_CP_WORKER" == "0" ]] || kubectl taint node -l node-role.kubernetes.io/master= node-role.kubernetes.io/master:NoSchedule-
    cluster_env=$(grep -vE "LAN_IP=|^#|^source |^export " "${BCS_DIR}"/bcs.env | base64 -w 0)
    master_join_cmd_b64=$(echo -n "${master_join_cmd}" | base64 -w 0)
    echo "<SOPS_VAR>master_join_cmd:${master_join_cmd}</SOPS_VAR>"
    echo "<SOPS_VAR>cluster_env:${cluster_env}</SOPS_VAR>"
    echo "<SOPS_VAR>master_join_cmd_b64:${master_join_cmd_b64}</SOPS_VAR>"
  fi
  cluster_env=$(grep -vE "LAN_IP=|^#|^source |^export " "${BCS_DIR}"/bcs.env | base64 -w 0)
  join_cmd_b64=$(echo -n "${join_cmd}" | base64 -w 0)
  echo "<SOPS_VAR>join_cmd:${join_cmd}</SOPS_VAR>"
  echo "<SOPS_VAR>cluster_env:${cluster_env}</SOPS_VAR>"
  echo "<SOPS_VAR>join_cmd_b64:${join_cmd_b64}</SOPS_VAR>"
  cat <<EOF
======================
$( highlight "Kubernetes control plane is up" )
$( [[ $K8S_VER =~ ^1.12 ]] && exit
highlight "To add another control-plane node, run:"
echo "set -a"
echo "cluster_env=${cluster_env}"
echo "master_join_cmd_b64=${master_join_cmd_b64}"
echo "set +a"
echo "curl -fsSL https://bkopen-1252002024.file.myqcloud.com/ce7/bcs.sh | bash -s -- install k8s-control-plane"
)
$( highlight "To add a worker node, run:"
echo "set -a"
echo "cluster_env=${cluster_env}"
echo "join_cmd_b64=${join_cmd_b64}"
echo "set +a"
echo "curl -fsSL https://bkopen-1252002024.file.myqcloud.com/ce7/bcs.sh | bash -s -- install k8s-node"
)
EOF
}

install_k8s () {
  if [[ ${cluster_env} == "null" ]]; then
    install_k8s-1st-ctrl
  else
    install_k8s-node
  fi
}

install_k8smaster () {
  install_k8s-1st-ctrl
}

install_k8s-1st-ctrl () {
  install_bcsenv
  install_docker
  install_k8stool
  install_helm
  install_k8sctrl
}

clean_k8snode () {
  clean_k8s-node
}

clean_k8s-node () {
  systemctl disable --now kubelet
  if [[ $K8S_VER =~ ^1.12 ]]; then
    kubeadm reset phase cleanup-node -f
  else
    kubeadm reset phase cleanup-node
  fi
  bak_dir="/data/backup/$(date +%s)"
  install -dv "$bak_dir" || error "create backup dir $bak_dir failed"
  docker ps | grep -qv NAME && docker rm -f "$(docker ps -aq)"
  [[ -d /etc/kubernetes  ]] && mv -v /etc/kubernetes "$bak_dir"/
  [[ -d /var/lib/kubelet ]] && mv -v /var/lib/kubelet "$bak_dir"/
  [[ -d ${KUBELET_LIB}   ]] && mv -v "${KUBELET_LIB}" "$bak_dir"/kubelet
  systemctl disable --now docker
  log "Uninstall docker, kubelet >>> Done"
}

clean_k8smaster () {
  clean_k8s-control-plane
}

clean_k8s-master () {
  clean_k8s-control-plane
}

clean_k8s-control-plane () {
  if [[ $K8S_VER =~ ^1.12 ]]; then
    kubeadm reset phase update-cluster-status -f
    kubeadm reset phase remove-etcd-member -f
  else
    kubeadm reset phase update-cluster-status
    kubeadm reset phase remove-etcd-member
  fi
  clean_k8snode
  [[ -d "$HOME"/.kube    ]] && mv -v "$HOME"/.kube "$bak_dir"/
  [[ -d ${ETCD_LIB}      ]] && mv -v "${ETCD_LIB}" "$bak_dir"/
  [[ -L /var/lib/etcd    ]] && rm -vf /var/lib/etcd
  [[ -d /var/lib/etcd    ]] && mv -v /var/lib/etcd "$bak_dir"/
  log "Uninstall Kubernetes Control Plane >>> Done"
}

install_k8snode () {
  install_k8s-node
}

install_k8s-control-plane () {
  install_k8s-node
}

install_k8s-node () {
  local join_cmd
  local node_name
  [[ ${cluster_env} != "null" ]] || error "cluster_env not set; please run the full command"
  install_bcsenv
  install_docker
  # Parameter checks
  if [[ -z ${BCS_K8S_CTRL_IP} ]]; then
    error "Kubernetes control plane IP not specified"
  elif [[ ${BCS_K8S_CTRL_IP} == "${LAN_IP}" ]]; then
    error "This node is the first Kubernetes control plane; run this command on another node"
  fi
  if [[ ${join_cmd_b64} != "null" ]] && [[ ${master_join_cmd_b64} == "null" ]]; then
    join_cmd="$(echo -n "${join_cmd_b64}" | base64 -d)"
    echo -n "${join_cmd}" | grep -q "kubeadm join" || error "invalid node join command"
    node_name="node-$(echo "$LAN_IP" | tr '.' '-')"
  elif [[ ${master_join_cmd_b64} != "null" ]]; then
    join_cmd="$(echo -n "${master_join_cmd_b64}" | base64 -d)"
    echo -n "${join_cmd}" | grep -q "kubeadm join" || error "invalid master join command"
    node_name="master-$(echo "$LAN_IP" | tr '.' '-')"
  else
    error "invalid arguments"
  fi
  install_localpv_dir
  if ! kubectl cluster-info 2>/dev/null && ! docker ps | grep -q pause; then
    install_k8stool
    systemctl enable --now kubelet
    ## etcd data directory
    install -dv "${ETCD_LIB}"
    ln -sfv "${ETCD_LIB}" /var/lib/
    cat "$BCS_DIR/kubeadm-config"
    highlight "$node_name: kubeadm join --config=$BCS_DIR/kubeadm-config -v 11"
    kubeadm join --config="$BCS_DIR/kubeadm-config" -v 11
    if [[ ${master_join_cmd_b64} != "null" ]]; then
      install -dv "$HOME/.kube"
      install -v -m 600 -o "$(id -u)" -g "$(id -g)" /etc/kubernetes/admin.conf "$HOME/.kube/config"
      log "Kubernetes control plane scaled out successfully"
      install_op
    else
      log "Kubernetes node added successfully"
    fi
  fi
}

## CNI
install_flannel () {
  cat << EOF | sed "s#10.244.0.0/16#${K8S_POD_CIDR}#g" | kubectl apply -f -
---
kind: Namespace
apiVersion: v1
metadata:
  name: kube-flannel
  labels:
    pod-security.kubernetes.io/enforce: privileged
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: flannel
rules:
- apiGroups:
  - ""
  resources:
  - pods
  verbs:
  - get
- apiGroups:
  - ""
  resources:
  - nodes
  verbs:
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - nodes/status
  verbs:
  - patch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: flannel
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: flannel
subjects:
- kind: ServiceAccount
  name: flannel
  namespace: kube-flannel
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: flannel
  namespace: kube-flannel
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: kube-flannel-cfg
  namespace: kube-flannel
  labels:
    tier: node
    app: flannel
data:
  cni-conf.json: |
    {
      "name": "cbr0",
      "cniVersion": "0.3.1",
      "plugins": [
        {
          "type": "flannel",
          "delegate": {
            "hairpinMode": true,
            "isDefaultGateway": true
          }
        },
        {
          "type": "portmap",
          "capabilities": {
            "portMappings": true
          }
        }
      ]
    }
  net-conf.json: |
    {
      "Network": "10.244.0.0/16",
      "Backend": {
        "Type": "vxlan"$([[ $K8S_CNI == "ws_flannel" ]] && echo ', "VNI" : 4096, "Port": 4789' )
      }
    }
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-flannel-ds
  namespace: kube-flannel
  labels:
    tier: node
    app: flannel
spec:selector:matchLabels:app: flanneltemplate:metadata:labels:tier: nodeapp: flannelspec:affinity:nodeAffinity:requiredDuringSchedulingIgnoredDuringExecution:nodeSelectorTerms:- matchExpressions:- key: kubernetes.io/osoperator: Invalues:- linuxhostNetwork: truepriorityClassName: system-node-criticaltolerations:- operator: Existseffect: NoScheduleserviceAccountName: flannelinitContainers:- name: install-cni-plugin#image: flannelcni/flannel-cni-plugin:v1.1.0 for ppc64le and mips64le (dockerhub limitations may apply)# 配置了本地docker源的需要注意修改自己的路径镜像 image: ${BK_PUBLIC_REPO}/docker.io/rancher/mirrored-flannelcni-flannel-cni-plugin:v1.1.0image: docker.io/rancher/mirrored-flannelcni-flannel-cni-plugin:v1.1.0command:- cpargs:- -f- /flannel- /opt/cni/bin/flannelvolumeMounts:- name: cni-pluginmountPath: /opt/cni/bin- name: install-cni#image: flannelcni/flannel:v0.19.2 for ppc64le and mips64le (dockerhub limitations may apply)image: ${BK_PUBLIC_REPO}/flannelcni/flannel:v0.19.2command:- cpargs:- -f- /etc/kube-flannel/cni-conf.json- /etc/cni/net.d/10-flannel.conflistvolumeMounts:- name: cnimountPath: /etc/cni/net.d- name: flannel-cfgmountPath: /etc/kube-flannel/containers:- name: kube-flannel#image: flannelcni/flannel:v0.19.2 for ppc64le and mips64le (dockerhub limitations may apply)image: ${BK_PUBLIC_REPO}/flannelcni/flannel:v0.19.2command:- /opt/bin/flanneldargs:- --ip-masq- --kube-subnet-mgrresources:requests:cpu: "100m"memory: "50Mi"limits:cpu: "100m"memory: "50Mi"securityContext:privileged: falsecapabilities:add: ["NET_ADMIN", "NET_RAW"]env:- name: POD_NAMEvalueFrom:fieldRef:fieldPath: metadata.name- name: POD_NAMESPACEvalueFrom:fieldRef:fieldPath: metadata.namespace- name: EVENT_QUEUE_DEPTHvalue: "5000"volumeMounts:- name: runmountPath: /run/flannel- name: flannel-cfgmountPath: /etc/kube-flannel/- name: xtables-lockmountPath: /run/xtables.lockvolumes:- name: runhostPath:path: /run/flannel- name: cni-pluginhostPath:path: /opt/cni/bin- name: cnihostPath:path: /etc/cni/net.d- 
name: flannel-cfgconfigMap:name: kube-flannel-cfg- name: xtables-lockhostPath:path: /run/xtables.locktype: FileOrCreate
EOF
}## Windows
install_ws_flannel () {
  if [[ -f /etc/tlinux-release ]]; then
    rmmod vxlan
    lsmod | grep -q vxlan && error "failed to unload the vxlan module"
    modprobe vxlan udp_port=4789
    log "Windows flannel VXLAN UDP Port using $(cat /sys/module/vxlan/parameters/udp_port)"
  fi
  install_flannel
  install_ws_kubeproxy
  install_ws_flannel_overlay
}

install_ws_kubeproxy () {
  cat << 'EOF' | sed "s/VERSION/v${K8S_VER}/g" | kubectl apply -f -
# https://github.com/kubernetes-sigs/sig-windows-tools/releases/latest/download/kube-proxy.yml
apiVersion: v1
data:
  run-script.ps1: |-
    $ErrorActionPreference = "Stop";

    # Get newest cni conf file that is not 0-containerd-nat.json or spin until one shows up.
    # With Docker the kube-proxy pod should not be scheduled to Windows nodes until host networking is configured.
    # With containerD host networking is required to schedule any pod including the CNI pods so a basic nat network is
    #  configured. This network should not be used by kube-proxy.
    function Get-NetConfFile {
      while ($true) {
        if (Test-Path /host/etc/cni/net.d/) {
          $files = @()
          $files += Get-ChildItem -Path /host/etc/cni/net.d/ -Exclude "0-containerd-nat.json"
          if ($files.Length -gt 0) {
            $file = (($files | Sort-Object LastWriteTime | Select-Object -Last 1).Name)
            Write-Host "Using CNI conf file: $file"
            return $file
          }
        }
        Write-Host "Waiting for CNI file..."
        Start-Sleep 10
      }
    }

    mkdir -force /host/var/lib/kube-proxy/var/run/secrets/kubernetes.io/serviceaccount
    mkdir -force /host/k/kube-proxy

    cp -force /k/kube-proxy/* /host/k/kube-proxy
    cp -force /var/lib/kube-proxy/* /host/var/lib/kube-proxy
    cp -force /var/run/secrets/kubernetes.io/serviceaccount/* /host/var/lib/kube-proxy/var/run/secrets/kubernetes.io/serviceaccount #FIXME?

    # If live patching kube-proxy, make sure and patch it inside this container, so that the SHA
    # matches that of what is on the host. i.e. uncomment the below line...
    # wget <download-path-to-kube-proxy.exe> -outfile k/kube-proxy/kube-proxy.exe
    cp -force /k/kube-proxy/* /host/k/kube-proxy

    $cniConfFile = Get-NetConfFile
    $networkName = (Get-Content "/host/etc/cni/net.d/$cniConfFile" | ConvertFrom-Json).name
    $sourceVip = ($env:POD_IP -split "\.")[0..2] + 0 -join "."
    yq w -i /host/var/lib/kube-proxy/config.conf winkernel.sourceVip $sourceVip
    yq w -i /host/var/lib/kube-proxy/config.conf winkernel.networkName $networkName
    yq w -i /host/var/lib/kube-proxy/config.conf featureGates.WinOverlay true
    yq w -i /host/var/lib/kube-proxy/config.conf mode "kernelspace"

    # Start the kube-proxy as a wins process on the host.
    # Note that this will rename kube-proxy.exe to rancher-wins-kube-proxy.exe on the host!
    wins cli process run --path /k/kube-proxy/kube-proxy.exe --args "--v=6 --config=/var/lib/kube-proxy/config.conf --hostname-override=$env:NODE_NAME --feature-gates=WinOverlay=true"
kind: ConfigMap
apiVersion: v1
metadata:
  labels:
    app: kube-proxy
  name: kube-proxy-windows
  namespace: kube-system
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  labels:
    k8s-app: kube-proxy
  name: kube-proxy-windows
  namespace: kube-system
spec:
  selector:
    matchLabels:
      k8s-app: kube-proxy-windows
  template:
    metadata:
      labels:
        k8s-app: kube-proxy-windows
    spec:
      serviceAccountName: kube-proxy
      containers:
      - command:
        - pwsh
        args:
        - -file
        - /var/lib/kube-proxy-windows/run-script.ps1
        env:
        - name: NODE_NAME
          valueFrom:
            fieldRef:
              apiVersion: v1
              fieldPath: spec.nodeName
        - name: POD_IP
          valueFrom:
            fieldRef:
              fieldPath: status.podIP
        image: sigwindowstools/kube-proxy:VERSION-nanoserver
        name: kube-proxy
        volumeMounts:
        - name: host
          mountPath: /host
        - name: wins
          mountPath: \\.\pipe\rancher_wins
        - mountPath: /var/lib/kube-proxy
          name: kube-proxy
        - mountPath: /var/lib/kube-proxy-windows
          name: kube-proxy-windows
      nodeSelector:
        kubernetes.io/os: windows
      tolerations:
      - key: CriticalAddonsOnly
        operator: Exists
      - operator: Exists
      volumes:
      - configMap:
          defaultMode: 420
          name: kube-proxy-windows
        name: kube-proxy-windows
      - configMap:
          name: kube-proxy
        name: kube-proxy
      - hostPath:
          path: /
        name: host
      - name: wins
        hostPath:
          path: \\.\pipe\rancher_wins
          type: null
  updateStrategy:
    type: RollingUpdate
EOF
}

clean_ws_kubeproxy () {
  kubectl delete -n kube-system daemonset.apps/kube-proxy-windows
  kubectl delete -n kube-system configmap/kube-proxy-windows
}

install_ws_flannel_overlay () {
  cat << 'EOF' | kubectl apply -f -
# https://github.com/kubernetes-sigs/sig-windows-tools/releases/latest/download/flannel-overlay.yml
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: kube-flannel-windows-cfg
  namespace: kube-system
  labels:
    tier: node
    app: flannel
data:
  run.ps1: |
    $ErrorActionPreference = "Stop";

    mkdir -force /host/etc/cni/net.d
    mkdir -force /host/etc/kube-flannel
    mkdir -force /host/opt/cni/bin
    mkdir -force /host/k/flannel
    mkdir -force /host/k/flannel/var/run/secrets/kubernetes.io/serviceaccount

    $containerRuntime = "docker"
    if (Test-Path /host/etc/cni/net.d/0-containerd-nat.json) {
      $containerRuntime = "containerd"
    }

    Write-Host "Configuring CNI for $containerRuntime"

    $serviceSubnet = yq r /etc/kubeadm-config/ClusterConfiguration networking.serviceSubnet
    $podSubnet = yq r /etc/kubeadm-config/ClusterConfiguration networking.podSubnet
    $networkJson = wins cli net get | convertfrom-json

    if ($containerRuntime -eq "docker") {
      $cniJson = get-content /etc/kube-flannel-windows/cni-conf.json | ConvertFrom-Json
      $cniJson.delegate.policies[0].Value.ExceptionList = $serviceSubnet, $podSubnet
      $cniJson.delegate.policies[1].Value.DestinationPrefix = $serviceSubnet
      Set-Content -Path /host/etc/cni/net.d/10-flannel.conf ($cniJson | ConvertTo-Json -depth 100)
    } elseif ($containerRuntime -eq "containerd") {
      $cniJson = get-content /etc/kube-flannel-windows/cni-conf-containerd.json | ConvertFrom-Json
      $cniJson.delegate.AdditionalArgs[0].Value.Settings.Exceptions = $serviceSubnet, $podSubnet
      $cniJson.delegate.AdditionalArgs[1].Value.Settings.DestinationPrefix = $serviceSubnet
      $cniJson.delegate.AdditionalArgs[2].Value.Settings.ProviderAddress = $networkJson.AddressCIDR.Split('/')[0]
      Set-Content -Path /host/etc/cni/net.d/10-flannel.conf ($cniJson | ConvertTo-Json -depth 100)
    }

    cp -force /etc/kube-flannel/net-conf.json /host/etc/kube-flannel
    cp -force -recurse /cni/* /host/opt/cni/bin
    cp -force /k/flannel/* /host/k/flannel/
    cp -force /kube-proxy/kubeconfig.conf /host/k/flannel/kubeconfig.yml
    cp -force /var/run/secrets/kubernetes.io/serviceaccount/* /host/k/flannel/var/run/secrets/kubernetes.io/serviceaccount/

    wins cli process run --path /k/flannel/setup.exe --args "--mode=overlay --interface=Ethernet"
    wins cli route add --addresses 169.254.169.254
    wins cli process run --path /k/flannel/flanneld.exe --args "--kube-subnet-mgr --kubeconfig-file /k/flannel/kubeconfig.yml" --envs "POD_NAME=$env:POD_NAME POD_NAMESPACE=$env:POD_NAMESPACE"
  cni-conf.json: |
    {
      "name": "flannel.4096",
      "cniVersion": "0.3.0",
      "type": "flannel",
      "capabilities": {
        "dns": true
      },
      "delegate": {
        "type": "win-overlay",
        "policies": [
          {
            "Name": "EndpointPolicy",
            "Value": {
              "Type": "OutBoundNAT",
              "ExceptionList": []
            }
          },
          {
            "Name": "EndpointPolicy",
            "Value": {
              "Type": "ROUTE",
              "DestinationPrefix": "",
              "NeedEncap": true
            }
          }
        ]
      }
    }
  cni-conf-containerd.json: |
    {
      "name": "flannel.4096",
      "cniVersion": "0.2.0",
      "type": "flannel",
      "capabilities": {
        "portMappings": true,
        "dns": true
      },
      "delegate": {
        "type": "sdnoverlay",
        "AdditionalArgs": [
          {
            "Name": "EndpointPolicy",
            "Value": {
              "Type": "OutBoundNAT",
              "Settings": {
                "Exceptions": []
              }
            }
          },
          {
            "Name": "EndpointPolicy",
            "Value": {
              "Type": "SDNROUTE",
              "Settings": {
                "DestinationPrefix": "",
                "NeedEncap": true
              }
            }
          },
          {
            "Name": "EndpointPolicy",
            "Value": {
              "Type": "ProviderAddress",
              "Settings": {
                "ProviderAddress": ""
              }
            }
          }
        ]
      }
    }
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-flannel-ds-windows-amd64
  labels:
    tier: node
    app: flannel
  namespace: kube-system
spec:
  selector:
    matchLabels:
      app: flannel
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: kubernetes.io/os
                operator: In
                values:
                - windows
              - key: kubernetes.io/arch
                operator: In
                values:
                - amd64
      hostNetwork: true
      serviceAccountName: flannel
      tolerations:
      - operator: Exists
        effect: NoSchedule
      containers:
      - name: kube-flannel
        image: sigwindowstools/flannel:v0.13.0-nanoserver
        command:
        - pwsh
        args:
        - -file
        - /etc/kube-flannel-windows/run.ps1
        volumeMounts:
        - name: wins
          mountPath: \\.\pipe\rancher_wins
        - name: host
          mountPath: /host
        - name: kube-proxy
          mountPath: /kube-proxy
        - name: cni
          mountPath: /etc/cni/net.d
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
        - name: flannel-windows-cfg
          mountPath: /etc/kube-flannel-windows/
        - name: kubeadm-config
          mountPath: /etc/kubeadm-config/
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              apiVersion: v1
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              apiVersion: v1
              fieldPath: metadata.namespace
      volumes:
      - name: opt
        hostPath:
          path: /opt
      - name: host
        hostPath:
          path: /
      - name: cni
        hostPath:
          path: /etc
      - name: flannel-cfg
        configMap:
          name: kube-flannel-cfg
      - name: flannel-windows-cfg
        configMap:
          name: kube-flannel-windows-cfg
      - name: kube-proxy
        configMap:
          name: kube-proxy
      - name: kubeadm-config
        configMap:
          name: kubeadm-config
      - name: wins
        hostPath:
          path: \\.\pipe\rancher_wins
          type: null
EOF
}

clean_ws_flannel_overlay () {
  kubectl delete -n kube-system daemonset.apps/kube-flannel-ds-windows-amd64
  kubectl delete -n kube-system configmap/kube-flannel-windows-cfg
}

### BCS
_init_bk_ns () {
  kubectl create ns bk-system
  kubectl patch ns bk-system --type=json -p='[{"op": "add", "path": "/metadata/labels", "value": {"bcs-webhook": "false"}}]'
}

### Kubernetes ecosystem tools
# k8s >= 1.18
install_ingress-nginx () {
  local NAMESPACE="bk-system"
  kubectl get ns "$NAMESPACE" || _init_bk_ns
  # If you use a local helm repository, change these lines, e.g.:
  # helm repo add localrepo http://192.168.10.42/helmChart
  # helm repo update
  helm repo add mirrors https://hub.bktencent.com/chartrepo/mirrors
  helm repo update
  cat << EOF | helm upgrade --install ingress-nginx mirrors/ingress-nginx -n $NAMESPACE --version 3.36.0 --debug -f - || error "helm upgrade failed"
controller:
  metrics:
    enabled: true
  image:
    # If you configured a local image registry, change the path accordingly, e.g.: registry: ${BK_PUBLIC_REPO}
    registry: ${BK_PUBLIC_REPO}/k8s.gcr.io
    tag: "v0.49.0"
    digest: ""
  config:
    # Number of requests a keep-alive connection between nginx and a client may serve.
    # Default 100; raise it for high-concurrency scenarios.
    # See: https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/configmap/#keep-alive-requests
    keep-alive-requests: "10000"
    # Maximum number of idle keep-alive connections to upstreams (not the max connections).
    # Default 32; raise it under high concurrency to avoid frequent reconnects driving up TIME_WAIT.
    # See: https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/configmap/#upstream-keepalive-connections
    upstream-keepalive-connections: "200"
    # Maximum number of connections each worker process may open. Default 16384.
    # See: https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/configmap/#max-worker-connections
    max-worker-connections: "65536"
    # Needed for file uploads
    proxy-body-size: "2G"
    proxy-read-timeout: "600"
  service:
    type: NodePort
    nodePorts:
      http: 32080
      https: 32443
  ingressClassResource:
    enabled: true
    default: true
  admissionWebhooks:
    patch:
      image:
        registry: ${BK_PUBLIC_REPO}
        tag: "v1.5.1"
        digest: ""
EOF
  kubectl -n $NAMESPACE wait --for=condition=available --timeout=600s deployment --all
  kubectl -n $NAMESPACE get deployments --output name | xargs -I{} kubectl -n $NAMESPACE rollout status --timeout=600s {}
}

install_kubeapps () {
  # If you use a local helm repository, change this to:
  # kubectl create namespace kubeapps
  # helm install kubeapps --namespace kubeapps localrepo/kubeapps
  helm repo add bitnami https://charts.bitnami.com/bitnami
  kubectl create namespace kubeapps
  helm install kubeapps --namespace kubeapps bitnami/kubeapps
}

clean_kubeapps () {
  helm uninstall kubeapps --namespace kubeapps
}

install_localpv_dir () {
  install -dv /mnt/blueking/vol{01..20} "${BK_HOME}/localpv"/vol{01..20} || error "create dir failed"
  for i in {01..20}; do
    src_dir="${BK_HOME}/localpv/vol$i"
    dst_dir="/mnt/blueking/vol$i"
    if grep -w "$src_dir" /etc/fstab; then
      warning "WARN: /etc/fstab [$src_dir] already exists"
    else
      echo "$src_dir $dst_dir none defaults,bind 0 0" | tee -a /etc/fstab || error "add /etc/fstab failed"
    fi
  done
  # Mount the bind entries
  mount -av || error "mount local pv dir failed"
}

install_localpv () {
  local NAMESPACE="bk-system"
  kubectl get ns "$NAMESPACE" || _init_bk_ns
  # If you use a local helm repository, comment out the next two lines
  helm repo add mirrors https://hub.bktencent.com/chartrepo/mirrors
  helm repo update
  cat << EOF | helm upgrade --install provisioner mirrors/provisioner -n $NAMESPACE --version 2.4.0 --debug -f - || error "helm upgrade failed"
daemonset:
  image: ${BK_PUBLIC_REPO}/k8s.gcr.io/sig-storage/local-volume-provisioner:v2.4.0
classes:
- name: local-storage
  hostDir: /mnt/blueking
  volumeMode: Filesystem
  storageClass: # create and set storage class as default
    isDefaultClass: true
    reclaimPolicy: Delete
EOF
  kubectl -n $NAMESPACE get daemonset --output name | xargs -I{} kubectl -n $NAMESPACE rollout status --timeout=600s {}
}

install_metrics-server () {
  cat << EOF | kubectl apply -f - || error "install metrics-server failed"
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    k8s-app: metrics-server
    rbac.authorization.k8s.io/aggregate-to-admin: "true"
    rbac.authorization.k8s.io/aggregate-to-edit: "true"
    rbac.authorization.k8s.io/aggregate-to-view: "true"
  name: system:aggregated-metrics-reader
rules:
- apiGroups:
  - metrics.k8s.io
  resources:
  - pods
  - nodes
  verbs:
  - get
  - list
  - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    k8s-app: metrics-server
  name: system:metrics-server
rules:
- apiGroups:
  - ""
  resources:
  - pods
  - nodes
  - nodes/stats
  - namespaces
  - configmaps
  verbs:
  - get
  - list
  - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server-auth-reader
  namespace: kube-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: extension-apiserver-authentication-reader
subjects:
- kind: ServiceAccount
  name: metrics-server
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server:system:auth-delegator
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:auth-delegator
subjects:
- kind: ServiceAccount
  name: metrics-server
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  labels:
    k8s-app: metrics-server
  name: system:metrics-server
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:metrics-server
subjects:
- kind: ServiceAccount
  name: metrics-server
  namespace: kube-system
---
apiVersion: v1
kind: Service
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server
  namespace: kube-system
spec:
  ports:
  - name: https
    port: 443
    protocol: TCP
    targetPort: https
  selector:
    k8s-app: metrics-server
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server
  namespace: kube-system
spec:
  selector:
    matchLabels:
      k8s-app: metrics-server
  strategy:
    rollingUpdate:
      maxUnavailable: 0
  template:
    metadata:
      labels:
        k8s-app: metrics-server
    spec:
      containers:
      - args:
        - --cert-dir=/tmp
        - --secure-port=4443
        - --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
        - --kubelet-use-node-status-port
        - --metric-resolution=15s
        - --kubelet-insecure-tls=true
        image: ${BK_PUBLIC_REPO}/k8s.gcr.io/metrics-server/metrics-server:v0.5.2
        imagePullPolicy: IfNotPresent
        livenessProbe:
          failureThreshold: 3
          httpGet:
            path: /livez
            port: https
            scheme: HTTPS
          periodSeconds: 10
        name: metrics-server
        ports:
        - containerPort: 4443
          name: https
          protocol: TCP
        readinessProbe:
          failureThreshold: 3
          httpGet:
            path: /readyz
            port: https
            scheme: HTTPS
          initialDelaySeconds: 20
          periodSeconds: 10
        resources:
          requests:
            cpu: 100m
            memory: 200Mi
        securityContext:
          readOnlyRootFilesystem: true
          runAsNonRoot: true
          runAsUser: 1000
        volumeMounts:
        - mountPath: /tmp
          name: tmp-dir
      nodeSelector:
        kubernetes.io/os: linux
      priorityClassName: system-cluster-critical
      serviceAccountName: metrics-server
      volumes:
      - emptyDir: {}
        name: tmp-dir
---
apiVersion: apiregistration.k8s.io/v1
kind: APIService
metadata:
  labels:
    k8s-app: metrics-server
  name: v1beta1.metrics.k8s.io
spec:
  group: metrics.k8s.io
  groupPriorityMinimum: 100
  insecureSkipTLSVerify: true
  service:
    name: metrics-server
    namespace: kube-system
  version: v1beta1
  versionPriority: 100
EOF
  kubectl -n kube-system rollout status --timeout=600s deployment/metrics-server
}

clean_metrics-server () {
  kubectl -n kube-system delete apiservice,deployment,service,clusterrolebinding,rolebinding,clusterrole,serviceaccount -l k8s-app=metrics-server || error "uninstall metrics-server failed"
}

## Script framework
check_args () {
  if [[ -n $1 ]]; then
    return 0
  else
    error "missing argument value"
    usage_and_exit 1
  fi
}

check_func () {
  local OP_TYPE="$1"
  local PROJECT="$2"
  if [[ -n ${OP_TYPE} ]] && [[ -n ${PROJECT} ]]; then
    type "${OP_TYPE}_${PROJECT}" &>/dev/null || error "${OP_TYPE} [$PROJECT] NOT SUPPORT"
  else
    return 0
  fi
}

# Parse command-line arguments, mixing long and short options
(( $# == 0 )) && usage_and_exit 1
while (( $# > 0 )); do
  case "$1" in
    --install | -i | install )
      shift
      PROJECT="$1"
      OP_TYPE="install"
      ;;
    --get | get )
      shift
      PROJECT="$1"
      OP_TYPE="get"
      ;;
    --clean | -c | clean )
      shift
      PROJECT="$1"
      OP_TYPE="clean"
      ;;
    --render | -r | render )
      shift
      PROJECT="$1"
      OP_TYPE="render"
      ;;
    --help | -h | '-?' | help )
      usage_and_exit 0
      ;;
    --version | -v | -V | version )
      version
      exit 0
      ;;
    -*)
      error "unrecognized argument: $1"
      ;;
    *)
      break
      ;;
  esac
  shift
done

check_func "${OP_TYPE}" "${PROJECT}"
[[ ${PROJECT} == "bcsenv" ]] || _on_cloud

case "${OP_TYPE}" in
  install)
    highlight "INSTALL: ${PROJECT}"
    "install_${PROJECT}"
    ok_bcs
    ;;
  get)
    highlight "Get: ${PROJECT}"
    "get_${PROJECT}"
    ok_bcs
    ;;
  clean)
    highlight "CLEAN: ${PROJECT}"
    "clean_${PROJECT}"
    bye_bcs
    ;;
  render)
    highlight "RENDER CONFIG TEMPLATE: ${PROJECT}"
    "render_${PROJECT}"
    ok_bcs
    ;;
  -*)
    error "unrecognized argument: $1"
    ;;
  *)
    usage_and_exit 0
esac
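The argument loop above mixes long options, short options, and bare subcommands in a single case statement. A minimal self-contained sketch of the same pattern (parse_demo is a hypothetical stand-in, not a bcs.sh function):

```shell
# Minimal sketch of bcs.sh's mixed long/short option parsing.
# parse_demo is illustrative only; bcs.sh does this at top level.
parse_demo () {
  local OP_TYPE="" PROJECT=""
  while (( $# > 0 )); do
    case "$1" in
      --install | -i | install ) shift; PROJECT="$1"; OP_TYPE="install" ;;
      --clean   | -c | clean   ) shift; PROJECT="$1"; OP_TYPE="clean" ;;
      -*) echo "unrecognized argument: $1" >&2; return 1 ;;
      *) break ;;
    esac
    shift
  done
  echo "${OP_TYPE} ${PROJECT}"
}

parse_demo -i k8s        # prints: install k8s
parse_demo --clean k8s   # prints: clean k8s
```

Because every spelling of an option lands in the same case branch, `./bcs.sh -i k8s`, `./bcs.sh --install k8s`, and `./bcs.sh install k8s` are equivalent.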

Make the script executable

chmod 777 bcs.sh 

Run the following command on the first master machine

./bcs.sh -i k8s

On success, the script prints the output below; copy these commands for the scale-out steps.

Kubernetes control-plane node started successfully. To add another control-plane node, run the following:

set -a
cluster_env=T05fQ0xPVUQ9ImJhcmUtbWV0YWwiCkJDU19ESVI9Ii9yb290Ly5iY3MiCkJLX0hPTUU9Ii9kYXRhL2JjcyIKYmNzX3N5c2N0bD0iMSIKTUlSUk9SX0lQPSJudWxsIgpNSVJST1JfVVJMPSJodHRwczovL21pcnJvcnMudGVuY2VudC5jb20iCkJLUkVQT19VUkw9Im51bGwiCkRPQ0tFUl9MSUI9Ii9kYXRhL2Jjcy9saWIvZG9ja2VyIgpET0NLRVJfVkVSU0lPTj0iMTkuMDMuOSIKRE9DS0VSX0xJVkVfUkVTVE9SRT0iZmFsc2UiClJFUE9fTUlSUk9SUz0nWyJodHRwczovL21pcnJvci5jY3MudGVuY2VudHl1bi5jb20iXScKRE9DS0VSX0JSSURHRT0ibnVsbCIKQktfUFVCTElDX1JFUE89Imh1Yi5ia3RlbmNlbnQuY29tIgpCS19SRUxFQVNFX1JFUE89Imh1Yi5ia3RlbmNlbnQuY29tL2JsdWVraW5nIgpLVUJFTEVUX0xJQj0iL2RhdGEvYmNzL2xpYi9rdWJlbGV0IgpLOFNfVkVSPSIxLjIwLjExIgpLOFNfU1ZDX0NJRFI9IjEwLjk2LjAuMC8xMiIKSzhTX1BPRF9DSURSPSIxMC4yNDQuMC4wLzE2IgpLOFNfRVhUUkFfQVJHUz0iYWxsb3dlZC11bnNhZmUtc3lzY3RsczogbmV0LmlwdjQudGNwX3R3X3JldXNlIgpFVENEX0xJQj0iL2RhdGEvYmNzL2xpYi9ldGNkIgpCQ1NfSzhTX0NUUkxfSVA9IjE5Mi4xNjguNDYuMTI4Igo=
master_join_cmd_b64=a3ViZWFkbSBqb2luIGs4cy1hcGkuYmNzLmxvY2FsOjY0NDMgLS10b2tlbiA5dXgzYWkuZW00ZmlxMGV4cWh2N2l6YiAgICAgLS1kaXNjb3ZlcnktdG9rZW4tY2EtY2VydC1oYXNoIHNoYTI1Njo0MjIzNWUyMzFmYzk3YzJiNWNkYTNiOWUxYTE2OTcxYzYyYzkwOGFmOTdhZTExOWE3ZDNjNzdhZDhlNTdjMjk1ICAtLWNvbnRyb2wtcGxhbmUgLS1jZXJ0aWZpY2F0ZS1rZXkgZjk3NjM3MDY0OGMwYzA5ZmQ0ZDk0OWMwYjEyOGNiYTdmNzMwOTVhN2YxZTJmM2QwNGM3MWYxNDIwNjdlMjQxMA==
set +a
curl -fsSL https://bkopen-1252002024.file.myqcloud.com/ce7/bcs.sh | bash -s -- install k8s-control-plane

To add a worker node, run the following:
set -a
cluster_env=T05fQ0xPVUQ9ImJhcmUtbWV0YWwiCkJDU19ESVI9Ii9yb290Ly5iY3MiCkJLX0hPTUU9Ii9kYXRhL2JjcyIKYmNzX3N5c2N0bD0iMSIKTUlSUk9SX0lQPSJudWxsIgpNSVJST1JfVVJMPSJodHRwczovL21pcnJvcnMudGVuY2VudC5jb20iCkJLUkVQT19VUkw9Im51bGwiCkRPQ0tFUl9MSUI9Ii9kYXRhL2Jjcy9saWIvZG9ja2VyIgpET0NLRVJfVkVSU0lPTj0iMTkuMDMuOSIKRE9DS0VSX0xJVkVfUkVTVE9SRT0iZmFsc2UiClJFUE9fTUlSUk9SUz0nWyJodHRwczovL21pcnJvci5jY3MudGVuY2VudHl1bi5jb20iXScKRE9DS0VSX0JSSURHRT0ibnVsbCIKQktfUFVCTElDX1JFUE89Imh1Yi5ia3RlbmNlbnQuY29tIgpCS19SRUxFQVNFX1JFUE89Imh1Yi5ia3RlbmNlbnQuY29tL2JsdWVraW5nIgpLVUJFTEVUX0xJQj0iL2RhdGEvYmNzL2xpYi9rdWJlbGV0IgpLOFNfVkVSPSIxLjIwLjExIgpLOFNfU1ZDX0NJRFI9IjEwLjk2LjAuMC8xMiIKSzhTX1BPRF9DSURSPSIxMC4yNDQuMC4wLzE2IgpLOFNfRVhUUkFfQVJHUz0iYWxsb3dlZC11bnNhZmUtc3lzY3RsczogbmV0LmlwdjQudGNwX3R3X3JldXNlIgpFVENEX0xJQj0iL2RhdGEvYmNzL2xpYi9ldGNkIgpCQ1NfSzhTX0NUUkxfSVA9IjE5Mi4xNjguNDYuMTI4Igo=
join_cmd_b64=a3ViZWFkbSBqb2luIGs4cy1hcGkuYmNzLmxvY2FsOjY0NDMgLS10b2tlbiA5dXgzYWkuZW00ZmlxMGV4cWh2N2l6YiAgICAgLS1kaXNjb3ZlcnktdG9rZW4tY2EtY2VydC1oYXNoIHNoYTI1Njo0MjIzNWUyMzFmYzk3YzJiNWNkYTNiOWUxYTE2OTcxYzYyYzkwOGFmOTdhZTExOWE3ZDNjNzdhZDhlNTdjMjk1IA==
set +a
curl -fsSL https://bkopen-1252002024.file.myqcloud.com/ce7/bcs.sh | bash -s -- install k8s-node
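The `cluster_env` value is just base64-encoded shell variable assignments (K8S_VER, BK_HOME, the join IP, and so on), so you can decode and review it before sourcing it on a new node. A sketch with an illustrative payload (the real value is the one printed by bcs.sh above):

```shell
# Decode a cluster_env payload to inspect it before using it.
# The value below is a small illustrative sample, not real bcs.sh output.
cluster_env=$(printf 'K8S_VER="1.20.11"\nBK_HOME="/data/bcs"\n' | base64 | tr -d '\n')
echo "$cluster_env" | base64 -d
```

The same trick works for `join_cmd_b64` and `master_join_cmd_b64`, which decode to plain `kubeadm join` commands.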

Scaling out master nodes

After uploading bcs.sh to the master node being added, run the following on that node:

Kubernetes control-plane node started successfully. To add another control-plane node, run the following:

set -a
cluster_env=T05fQ0xPVUQ9ImJhcmUtbWV0YWwiCkJDU19ESVI9Ii9yb290Ly5iY3MiCkJLX0hPTUU9Ii9kYXRhL2JjcyIKYmNzX3N5c2N0bD0iMSIKTUlSUk9SX0lQPSJudWxsIgpNSVJST1JfVVJMPSJodHRwczovL21pcnJvcnMudGVuY2VudC5jb20iCkJLUkVQT19VUkw9Im51bGwiCkRPQ0tFUl9MSUI9Ii9kYXRhL2Jjcy9saWIvZG9ja2VyIgpET0NLRVJfVkVSU0lPTj0iMTkuMDMuOSIKRE9DS0VSX0xJVkVfUkVTVE9SRT0iZmFsc2UiClJFUE9fTUlSUk9SUz0nWyJodHRwczovL21pcnJvci5jY3MudGVuY2VudHl1bi5jb20iXScKRE9DS0VSX0JSSURHRT0ibnVsbCIKQktfUFVCTElDX1JFUE89Imh1Yi5ia3RlbmNlbnQuY29tIgpCS19SRUxFQVNFX1JFUE89Imh1Yi5ia3RlbmNlbnQuY29tL2JsdWVraW5nIgpLVUJFTEVUX0xJQj0iL2RhdGEvYmNzL2xpYi9rdWJlbGV0IgpLOFNfVkVSPSIxLjIwLjExIgpLOFNfU1ZDX0NJRFI9IjEwLjk2LjAuMC8xMiIKSzhTX1BPRF9DSURSPSIxMC4yNDQuMC4wLzE2IgpLOFNfRVhUUkFfQVJHUz0iYWxsb3dlZC11bnNhZmUtc3lzY3RsczogbmV0LmlwdjQudGNwX3R3X3JldXNlIgpFVENEX0xJQj0iL2RhdGEvYmNzL2xpYi9ldGNkIgpCQ1NfSzhTX0NUUkxfSVA9IjE5Mi4xNjguNDYuMTI4Igo=
master_join_cmd_b64=a3ViZWFkbSBqb2luIGs4cy1hcGkuYmNzLmxvY2FsOjY0NDMgLS10b2tlbiA5dXgzYWkuZW00ZmlxMGV4cWh2N2l6YiAgICAgLS1kaXNjb3ZlcnktdG9rZW4tY2EtY2VydC1oYXNoIHNoYTI1Njo0MjIzNWUyMzFmYzk3YzJiNWNkYTNiOWUxYTE2OTcxYzYyYzkwOGFmOTdhZTExOWE3ZDNjNzdhZDhlNTdjMjk1ICAtLWNvbnRyb2wtcGxhbmUgLS1jZXJ0aWZpY2F0ZS1rZXkgZjk3NjM3MDY0OGMwYzA5ZmQ0ZDk0OWMwYjEyOGNiYTdmNzMwOTVhN2YxZTJmM2QwNGM3MWYxNDIwNjdlMjQxMA==
set +a

Then run

curl -fsSL https://bkopen-1252002024.file.myqcloud.com/ce7/bcs.sh | bash -s -- install k8s-control-plane
or
./bcs.sh -i k8s-control-plane

Scaling out worker nodes

After uploading bcs.sh to the worker node being added, run the following on that node:

Run the following commands on the node to be added:
set -a
cluster_env=T05fQ0xPVUQ9ImJhcmUtbWV0YWwiCkJDU19ESVI9Ii9yb290Ly5iY3MiCkJLX0hPTUU9Ii9kYXRhL2JjcyIKYmNzX3N5c2N0bD0iMSIKTUlSUk9SX0lQPSJudWxsIgpNSVJST1JfVVJMPSJodHRwczovL21pcnJvcnMudGVuY2VudC5jb20iCkJLUkVQT19VUkw9Im51bGwiCkRPQ0tFUl9MSUI9Ii9kYXRhL2Jjcy9saWIvZG9ja2VyIgpET0NLRVJfVkVSU0lPTj0iMTkuMDMuOSIKRE9DS0VSX0xJVkVfUkVTVE9SRT0iZmFsc2UiClJFUE9fTUlSUk9SUz0nWyJodHRwczovL21pcnJvci5jY3MudGVuY2VudHl1bi5jb20iXScKRE9DS0VSX0JSSURHRT0ibnVsbCIKQktfUFVCTElDX1JFUE89Imh1Yi5ia3RlbmNlbnQuY29tIgpCS19SRUxFQVNFX1JFUE89Imh1Yi5ia3RlbmNlbnQuY29tL2JsdWVraW5nIgpLVUJFTEVUX0xJQj0iL2RhdGEvYmNzL2xpYi9rdWJlbGV0IgpLOFNfVkVSPSIxLjIwLjExIgpLOFNfU1ZDX0NJRFI9IjEwLjk2LjAuMC8xMiIKSzhTX1BPRF9DSURSPSIxMC4yNDQuMC4wLzE2IgpLOFNfRVhUUkFfQVJHUz0iYWxsb3dlZC11bnNhZmUtc3lzY3RsczogbmV0LmlwdjQudGNwX3R3X3JldXNlIgpFVENEX0xJQj0iL2RhdGEvYmNzL2xpYi9ldGNkIgpCQ1NfSzhTX0NUUkxfSVA9IjE5Mi4xNjguNDYuMTI4Igo=
join_cmd_b64=a3ViZWFkbSBqb2luIGs4cy1hcGkuYmNzLmxvY2FsOjY0NDMgLS10b2tlbiA5dXgzYWkuZW00ZmlxMGV4cWh2N2l6YiAgICAgLS1kaXNjb3ZlcnktdG9rZW4tY2EtY2VydC1oYXNoIHNoYTI1Njo0MjIzNWUyMzFmYzk3YzJiNWNkYTNiOWUxYTE2OTcxYzYyYzkwOGFmOTdhZTExOWE3ZDNjNzdhZDhlNTdjMjk1IA==
set +a

Then run

curl -fsSL https://bkopen-1252002024.file.myqcloud.com/ce7/bcs.sh | bash -s -- install k8s-node
or
./bcs.sh -i k8s-node

Checking the cluster

Use the following commands:

kubectl get node
or
kubectl get node -o wide
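Every node should eventually report `Ready` in the STATUS column. A quick way to list nodes that are not yet ready is to filter `kubectl get nodes --no-headers` output with awk; the sketch below runs the filter on captured sample output (node names taken from the table above) so it works without a cluster:

```shell
# Filter not-Ready nodes from `kubectl get nodes --no-headers` style output.
# On a real cluster: kubectl get nodes --no-headers | awk '$2 != "Ready" {print $1}'
sample='k8s-api.bcs.local     Ready      control-plane,master   10m   v1.20.11
node-192-168-46-129   NotReady   <none>                 1m    v1.20.11'
printf '%s\n' "$sample" | awk '$2 != "Ready" {print $1}'
```

An empty result means all nodes are ready; here it prints `node-192-168-46-129`.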

A problem you may hit

On newly joined nodes, kubectl may fail with: The connection to the server localhost:8080 was refused - did you specify the right host or port?

# Copy admin.conf from the primary master node to the affected machine

scp /etc/kubernetes/admin.conf 192.168.10.42:/etc/kubernetes/admin.conf

Enter the password when prompted.

Once the copy completes, run the following on the affected machine:

echo "export KUBECONFIG=/etc/kubernetes/admin.conf" >> ~/.bash_profile
source ~/.bash_profile
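kubectl only falls back to localhost:8080 when it has no kubeconfig at all, so the fix is simply pointing KUBECONFIG at the copied admin.conf. A quick sanity check after sourcing the profile (the path is the standard kubeadm admin config; adjust if yours differs):

```shell
# Fail fast with a clear message if KUBECONFIG did not get set.
export KUBECONFIG=/etc/kubernetes/admin.conf
echo "${KUBECONFIG:?KUBECONFIG is not set}"
```

If the echo prints the path, `kubectl get node` should now talk to the API server at k8s-api.bcs.local:6443 instead of localhost:8080.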
