Installing and Deploying k8s (Kubernetes) with kubeadm

1. Each node needs at least 2 CPU cores and 2 GB of RAM.

2. First, install Docker on every node.

# Generic install (latest packages):
# sudo yum install -y docker-ce docker-ce-cli containerd.io
# The pinned versions below are the ones used with this k8s install; keep them consistent across nodes
yum install -y docker-ce-20.10.7 docker-ce-cli-20.10.7 containerd.io-1.4.6

# Configure the Docker daemon: registry mirror, systemd cgroup driver, log rotation, overlay2 storage
sudo mkdir -p /etc/docker
sudo tee /etc/docker/daemon.json <<-'EOF'
{
  "registry-mirrors": ["https://82m9ar63.mirror.aliyuncs.com"],
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {"max-size": "100m"},
  "storage-driver": "overlay2"
}
EOF

sudo systemctl daemon-reload
sudo systemctl restart docker
sudo systemctl enable docker
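A quick sanity check (a suggestion, not part of the original steps): after the restart, Docker should report the systemd cgroup driver so that Docker and the kubelet agree on cgroup management.

docker info | grep -i "cgroup driver"
# Expected output: Cgroup Driver: systemd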

#Set a unique hostname on each machine
hostnamectl set-hostname k8s1

hostnamectl set-hostname k8s2

hostnamectl set-hostname k8s3

#Run the following on every machine

# Stop and disable the firewall
systemctl stop firewalld
systemctl disable firewalld

# Optional: stop and disable iptables
systemctl stop iptables
systemctl disable iptables

# Set SELinux to permissive mode (effectively disabling it)
sudo setenforce 0
sudo sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config

# Disable swap
swapoff -a
sed -ri 's/.*swap.*/#&/' /etc/fstab

# Allow iptables to see bridged traffic
# (net.ipv4.ip_forward = 1 can be added here as well)
cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
br_netfilter
EOF

cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF

sudo sysctl --system
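Optionally, verify that the module is loaded and the sysctls took effect (a suggested check, not in the original steps):

lsmod | grep br_netfilter
sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables
# Both values should print as 1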

#Run the following on every machine: add the Kubernetes yum repo and install kubelet/kubeadm/kubectl

cat <<EOF | sudo tee /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
exclude=kubelet kubeadm kubectl
EOF

sudo yum install -y kubelet-1.20.9 kubeadm-1.20.9 kubectl-1.20.9 --disableexcludes=kubernetes
sudo systemctl enable --now kubelet
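As a quick check (suggested, not in the original), all three tools should now report v1.20.9:

kubeadm version -o short
kubelet --version
kubectl version --client --short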

#Run the following on every machine to pre-pull the required control-plane images

sudo tee ./images.sh <<-'EOF'
#!/bin/bash
images=(
kube-apiserver:v1.20.9
kube-proxy:v1.20.9
kube-controller-manager:v1.20.9
kube-scheduler:v1.20.9
coredns:1.7.0
etcd:3.4.13-0
pause:3.2
)
for imageName in ${images[@]} ; do
docker pull registry.cn-hangzhou.aliyuncs.com/lfy_k8s_images/$imageName
done
EOF

chmod +x ./images.sh && ./images.sh
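Optionally, confirm the images landed in the local Docker cache (a suggested check):

docker images | grep lfy_k8s_images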

#On every machine, add a hosts entry for the master (cluster endpoint); change the IP below to your own
echo "192.168.179.51  cluster-endpoint" >> /etc/hosts

Initialize the master node (run the following on the master node only)

#Master node initialization; the pod and service CIDRs must not overlap with each other or with the node network
kubeadm init \
--apiserver-advertise-address=192.168.179.51 \
--control-plane-endpoint=cluster-endpoint \
--image-repository registry.cn-hangzhou.aliyuncs.com/lfy_k8s_images \
--kubernetes-version v1.20.9 \
--service-cidr=10.96.0.0/16 \
--pod-network-cidr=192.169.0.0/16

When it finishes, output like the following is generated:


Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of control-plane nodes by copying certificate authorities
and service account keys on each node and then running the following as root:

  kubeadm join cluster-endpoint:6443 --token hums8f.vyx71prsg74ofce7 \
    --discovery-token-ca-cert-hash sha256:a394d059dd51d68bb007a532a037d0a477131480ae95f75840c461e85e2c6ae3 \
    --control-plane

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join cluster-endpoint:6443 --token hums8f.vyx71prsg74ofce7 \
    --discovery-token-ca-cert-hash sha256:a394d059dd51d68bb007a532a037d0a477131480ae95f75840c461e85e2c6ae3

Of the two generated kubeadm join commands, the first (with --control-plane) turns a machine into an additional master node; the second turns a machine into a worker node.

First, run this part:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

You can now run the following commands to inspect the cluster (only the master has been initialized so far):

#List all cluster nodes
kubectl get nodes

#Create resources in the cluster from a config file
kubectl apply -f xxxx.yaml

#Which applications are deployed in the cluster?
#Roughly, docker ps on a node corresponds to kubectl get pods -A in k8s:
#a running application is a container in Docker and a Pod in k8s
kubectl get pods -A
Next, install the Calico network plugin on the master:

[root@k8s1 ~]# curl https://docs.projectcalico.org/v3.20/manifests/calico.yaml -O
[root@k8s1 ~]# cat calico.yaml | grep 192.168
#   value: "192.168.0.0/16"

vim calico.yaml

Uncomment these two lines (the CALICO_IPV4POOL_CIDR name/value pair) and change the value to the "--pod-network-cidr=" value used when initializing the master, here 192.169.0.0/16. Watch the YAML indentation!

kubectl apply -f calico.yaml
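Calico's pods start in the kube-system namespace; a suggested way to watch them until every calico pod reaches Running:

kubectl get pods -n kube-system -w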

#Print the command for joining nodes to the cluster (--ttl 0 makes the token never expire)

kubeadm token create --print-join-command --ttl 0

Run the printed join command on every worker node, then check from the master:

kubectl get nodes

kubectl get pods -A

Wait until every node shows STATUS Ready and every pod shows STATUS Running. At that point the join is complete.
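For reference, a healthy three-node cluster looks roughly like this (illustrative output only; names, ages and roles depend on your environment):

kubectl get nodes
# NAME   STATUS   ROLES                  AGE   VERSION
# k8s1   Ready    control-plane,master   25m   v1.20.9
# k8s2   Ready    <none>                 5m    v1.20.9
# k8s3   Ready    <none>                 5m    v1.20.9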

At this point k8s is installed; many companies run it in exactly this bare-bones form. But the cluster's inner workings are hard to follow from the command line alone, so let's add a graphical UI, as follows.

Deploying the Dashboard

Run vim recommended.yaml, then type :set paste and press Enter to switch vim into paste mode, and paste in the manifest below.

# Copyright 2017 The Kubernetes Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

apiVersion: v1
kind: Namespace
metadata:
  name: kubernetes-dashboard

---

apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard

---

kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  ports:
    - port: 443
      targetPort: 8443
  selector:
    k8s-app: kubernetes-dashboard

---

apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-certs
  namespace: kubernetes-dashboard
type: Opaque

---

apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-csrf
  namespace: kubernetes-dashboard
type: Opaque
data:
  csrf: ""

---

apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-key-holder
  namespace: kubernetes-dashboard
type: Opaque

---

kind: ConfigMap
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-settings
  namespace: kubernetes-dashboard

---

kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
rules:
  # Allow Dashboard to get, update and delete Dashboard exclusive secrets.
  - apiGroups: [""]
    resources: ["secrets"]
    resourceNames: ["kubernetes-dashboard-key-holder", "kubernetes-dashboard-certs", "kubernetes-dashboard-csrf"]
    verbs: ["get", "update", "delete"]
  # Allow Dashboard to get and update 'kubernetes-dashboard-settings' config map.
  - apiGroups: [""]
    resources: ["configmaps"]
    resourceNames: ["kubernetes-dashboard-settings"]
    verbs: ["get", "update"]
  # Allow Dashboard to get metrics.
  - apiGroups: [""]
    resources: ["services"]
    resourceNames: ["heapster", "dashboard-metrics-scraper"]
    verbs: ["proxy"]
  - apiGroups: [""]
    resources: ["services/proxy"]
    resourceNames: ["heapster", "http:heapster:", "https:heapster:", "dashboard-metrics-scraper", "http:dashboard-metrics-scraper"]
    verbs: ["get"]

---

kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
rules:
  # Allow Metrics Scraper to get metrics from the Metrics server
  - apiGroups: ["metrics.k8s.io"]
    resources: ["pods", "nodes"]
    verbs: ["get", "list", "watch"]

---

apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: kubernetes-dashboard
subjects:
  - kind: ServiceAccount
    name: kubernetes-dashboard
    namespace: kubernetes-dashboard

---

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: kubernetes-dashboard
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: kubernetes-dashboard
subjects:
  - kind: ServiceAccount
    name: kubernetes-dashboard
    namespace: kubernetes-dashboard

---

kind: Deployment
apiVersion: apps/v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      k8s-app: kubernetes-dashboard
  template:
    metadata:
      labels:
        k8s-app: kubernetes-dashboard
    spec:
      containers:
        - name: kubernetes-dashboard
          image: kubernetesui/dashboard:v2.3.1
          imagePullPolicy: Always
          ports:
            - containerPort: 8443
              protocol: TCP
          args:
            - --auto-generate-certificates
            - --namespace=kubernetes-dashboard
            # Uncomment the following line to manually specify Kubernetes API server Host
            # If not specified, Dashboard will attempt to auto discover the API server and connect
            # to it. Uncomment only if the default does not work.
            # - --apiserver-host=http://my-address:port
          volumeMounts:
            - name: kubernetes-dashboard-certs
              mountPath: /certs
            # Create on-disk volume to store exec logs
            - mountPath: /tmp
              name: tmp-volume
          livenessProbe:
            httpGet:
              scheme: HTTPS
              path: /
              port: 8443
            initialDelaySeconds: 30
            timeoutSeconds: 30
          securityContext:
            allowPrivilegeEscalation: false
            readOnlyRootFilesystem: true
            runAsUser: 1001
            runAsGroup: 2001
      volumes:
        - name: kubernetes-dashboard-certs
          secret:
            secretName: kubernetes-dashboard-certs
        - name: tmp-volume
          emptyDir: {}
      serviceAccountName: kubernetes-dashboard
      nodeSelector:
        "kubernetes.io/os": linux
      # Comment the following tolerations if Dashboard must not be deployed on master
      tolerations:
        - key: node-role.kubernetes.io/master
          effect: NoSchedule

---

kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: dashboard-metrics-scraper
  name: dashboard-metrics-scraper
  namespace: kubernetes-dashboard
spec:
  ports:
    - port: 8000
      targetPort: 8000
  selector:
    k8s-app: dashboard-metrics-scraper

---

kind: Deployment
apiVersion: apps/v1
metadata:
  labels:
    k8s-app: dashboard-metrics-scraper
  name: dashboard-metrics-scraper
  namespace: kubernetes-dashboard
spec:
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      k8s-app: dashboard-metrics-scraper
  template:
    metadata:
      labels:
        k8s-app: dashboard-metrics-scraper
      annotations:
        seccomp.security.alpha.kubernetes.io/pod: 'runtime/default'
    spec:
      containers:
        - name: dashboard-metrics-scraper
          image: kubernetesui/metrics-scraper:v1.0.6
          ports:
            - containerPort: 8000
              protocol: TCP
          livenessProbe:
            httpGet:
              scheme: HTTP
              path: /
              port: 8000
            initialDelaySeconds: 30
            timeoutSeconds: 30
          volumeMounts:
            - mountPath: /tmp
              name: tmp-volume
          securityContext:
            allowPrivilegeEscalation: false
            readOnlyRootFilesystem: true
            runAsUser: 1001
            runAsGroup: 2001
      serviceAccountName: kubernetes-dashboard
      nodeSelector:
        "kubernetes.io/os": linux
      # Comment the following tolerations if Dashboard must not be deployed on master
      tolerations:
        - key: node-role.kubernetes.io/master
          effect: NoSchedule
      volumes:
        - name: tmp-volume
          emptyDir: {}
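If the master node can reach GitHub, the same manifest can also be downloaded instead of pasting it by hand (URL assumed from the dashboard image tag v2.3.1 used above):

curl -LO https://raw.githubusercontent.com/kubernetes/dashboard/v2.3.1/aio/deploy/recommended.yaml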

 kubectl apply -f recommended.yaml

kubectl get pods -A
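The Dashboard runs in its own namespace, so a more focused check (a suggestion, not in the original) is to wait until both dashboard pods are Running:

kubectl get pods -n kubernetes-dashboard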

kubectl edit svc kubernetes-dashboard -n kubernetes-dashboard

Change the Service to the NodePort access type: in the editor, search for the type field (for example with /type and Enter) and change ClusterIP to NodePort.

In this example the Dashboard is exposed on NodePort 30441; yours may differ.
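If you prefer not to edit the Service interactively, here is a sketch of a non-interactive alternative (the NodePort that gets assigned is random unless you set one explicitly):

kubectl -n kubernetes-dashboard patch svc kubernetes-dashboard -p '{"spec":{"type":"NodePort"}}'
kubectl -n kubernetes-dashboard get svc kubernetes-dashboard
# In the PORT(S) column, an entry like 443:30441/TCP means the Dashboard is reachable over HTTPS on node port 30441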

If the browser blocks the page because of the self-signed certificate, that is just a browser quirk: type thisisunsafe directly on that page to proceed.

The next screen asks for a login token.

Prepare a YAML file for the access account:

vim dash-user.yaml

#Create the access account; put the following into dash-user.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kubernetes-dashboard
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kubernetes-dashboard

kubectl apply -f dash-user.yaml
 

Get the login token:

kubectl -n kubernetes-dashboard get secret $(kubectl -n kubernetes-dashboard get sa/admin-user -o jsonpath="{.secrets[0].name}") -o go-template="{{.data.token | base64decode}}"
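If the go-template above gives trouble, an equivalent approach on this 1.20 cluster (where ServiceAccount token Secrets are still auto-created) is to describe the Secret and copy the token field; the grep pattern is an assumption about the auto-generated Secret name:

kubectl -n kubernetes-dashboard get secret | grep admin-user
kubectl -n kubernetes-dashboard describe secret $(kubectl -n kubernetes-dashboard get secret | grep admin-user | awk '{print $1}')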

Copy the token and paste it into the Dashboard login page.

If you get a 401 error, copy the token again and paste it once more.
