Contents
- Eviction Causes
- Reproducing the Problem
- Solutions
In Kubernetes, the Pod is the smallest schedulable unit. When a Pod can no longer run normally on the node it was assigned to, it may be evicted. This can happen for several reasons, such as insufficient node resources, the Pod exceeding its allotted resource limits, or image pull failures.
Eviction Causes
- Insufficient node resources: node resources include CPU, memory, and disk space. When a node no longer has enough of a resource to keep all of its Pods running, the kubelet picks one or more Pods and evicts them from the node (the commands after this list show how to check node pressure).
- Exceeding resource limits: a Pod can declare upper bounds on resources such as CPU and memory. A Pod that uses more than it requested becomes a prime candidate for eviction once the node comes under pressure.
- Image pull failures: if the image a Pod needs cannot be pulled, Kubernetes retries a number of times and eventually leaves the Pod in a failed state.
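When node pressure is the suspected cause, two standard kubectl commands are usually enough to confirm it; kubectl top assumes the metrics-server add-on is installed in the cluster:
# Check node conditions (MemoryPressure/DiskPressure) and allocated resources
kubectl describe node <node-name>
# Show current CPU/memory usage per node (requires metrics-server)
kubectl top node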
Reproducing the Problem
(1) Checking the Pods in the cluster shows that the STATUS of some of them is Evicted:
(2) Use the kubectl describe command to view the Pod's status and events.
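For example, listing the Pods in the affected namespace (asrank in this walkthrough) makes the Evicted entries easy to spot:
kubectl get pods -n asrank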
kubectl describe pod <pod-name> -n <namespace-name>
kubectl describe pod knowledge-base-5799896986-zl55n -n asrank
[root@node-1 ~]# kubectl describe pod knowledge-base-5799896986-zl55n -n asrank
Name: knowledge-base-5799896986-zl55n
Namespace: asrank
Priority: 0
Node: node-1/
Start Time: Fri, 29 Mar 2024 11:12:16 +0800
Labels:       app=knowledge-base
              istio.io/rev=default
              pod-template-hash=5799896986
              security.istio.io/tlsMode=istio
              service.istio.io/canonical-name=knowledge-base
              service.istio.io/canonical-revision=latest
Annotations:  kubectl.kubernetes.io/default-container: knowledge-base
              kubectl.kubernetes.io/default-logs-container: knowledge-base
              prometheus.io/path: /stats/prometheus
              prometheus.io/port: 15020
              prometheus.io/scrape: true
              sidecar.istio.io/status: {"initContainers":["istio-init"],"containers":["istio-proxy"],"volumes":["istio-envoy","istio-data","istio-podinfo","istiod-ca-cert"],"ima...
Status: Failed
Reason: Evicted
Message: The node was low on resource: memory. Container knowledge-base was using 43775192Ki, which exceeds its request of 5000Mi.
IP:
IPs: <none>
Controlled By: ReplicaSet/knowledge-base-5799896986
Init Containers:
  istio-init:
    Image:      hub.pmlabs.com.cn/sail/istio/proxyv2:1.10.6
    Port:       <none>
    Host Port:  <none>
    Args:
      istio-iptables
      -p
      15001
      -z
      15006
      -u
      1337
      -m
      REDIRECT
      -i
      *
      -x
      -b
      *
      -d
      15090,15021,15020
    Limits:
      cpu:     2
      memory:  1Gi
    Requests:
      cpu:        100m
      memory:     128Mi
    Environment:  <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-cfzc5 (ro)
Containers:
  knowledge-base:
    Image:      hub.pmlabs.com.cn/asrank/knowledge-base:1.01.201
    Port:       9303/TCP
    Host Port:  0/TCP
    Limits:
      cpu:     24
      memory:  48Gi
    Requests:
      cpu:     1
      memory:  5000Mi
    Environment:
      PROFILE:         prod
      MYSQL_PASSWORD:  <set to the key 'MYSQL_ROOT_PASSWORD' in secret 'mysql-root-password'>  Optional: false
    Mounts:
      /data/ from datasets (rw)
      /etc/localtime from date-config (rw)
      /log/ from app-log (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-cfzc5 (ro)
  istio-proxy:
    Image:      hub.pmlabs.com.cn/sail/istio/proxyv2:1.10.6
    Port:       15090/TCP
    Host Port:  0/TCP
    Args:
      proxy
      sidecar
      --domain
      $(POD_NAMESPACE).svc.cluster.local
      --serviceCluster
      knowledge-base.$(POD_NAMESPACE)
      --proxyLogLevel=warning
      --proxyComponentLogLevel=misc:error
      --log_output_level=default:info
      --concurrency
      2
    Limits:
      cpu:     2
      memory:  1Gi
    Requests:
      cpu:      100m
      memory:   128Mi
    Readiness:  http-get http://:15021/healthz/ready delay=1s timeout=3s period=2s #success=1 #failure=30
    Environment:
      JWT_POLICY:                    first-party-jwt
      PILOT_CERT_PROVIDER:           istiod
      CA_ADDR:                       istiod.istio-system.svc:15012
      POD_NAME:                      knowledge-base-5799896986-zl55n (v1:metadata.name)
      POD_NAMESPACE:                 asrank (v1:metadata.namespace)
      INSTANCE_IP:                   (v1:status.podIP)
      SERVICE_ACCOUNT:               (v1:spec.serviceAccountName)
      HOST_IP:                       (v1:status.hostIP)
      CANONICAL_SERVICE:             (v1:metadata.labels['service.istio.io/canonical-name'])
      CANONICAL_REVISION:            (v1:metadata.labels['service.istio.io/canonical-revision'])
      PROXY_CONFIG:                  {}
      ISTIO_META_POD_PORTS:          [{"containerPort":9303,"protocol":"TCP"}]
      ISTIO_META_APP_CONTAINERS:     knowledge-base
      ISTIO_META_CLUSTER_ID:         Kubernetes
      ISTIO_META_INTERCEPTION_MODE:  REDIRECT
      ISTIO_META_WORKLOAD_NAME:      knowledge-base
      ISTIO_META_OWNER:              kubernetes://apis/apps/v1/namespaces/asrank/deployments/knowledge-base
      ISTIO_META_MESH_ID:            cluster.local
      TRUST_DOMAIN:                  cluster.local
    Mounts:
      /etc/istio/pod from istio-podinfo (rw)
      /etc/istio/proxy from istio-envoy (rw)
      /var/lib/istio/data from istio-data (rw)
      /var/run/secrets/istio from istiod-ca-cert (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-cfzc5 (ro)
Volumes:
  istio-envoy:
    Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:     Memory
    SizeLimit:  <unset>
  istio-data:
    Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:
    SizeLimit:  <unset>
  istio-podinfo:
    Type:  DownwardAPI (a volume populated by information about the pod)
    Items:
      metadata.labels -> labels
      metadata.annotations -> annotations
      limits.cpu -> cpu-limit
      requests.cpu -> cpu-request
  istiod-ca-cert:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      istio-ca-root-cert
    Optional:  false
  date-config:
    Type:          HostPath (bare host directory volume)
    Path:          /etc/localtime
    HostPathType:
  datasets:
    Type:          HostPath (bare host directory volume)
    Path:          /var/asrank/knowledge-base/data
    HostPathType:
  app-log:
    Type:          HostPath (bare host directory volume)
    Path:          /var/log/app/asrank/knowledge-base
    HostPathType:
  default-token-cfzc5:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-cfzc5
    Optional:    false
QoS Class: Burstable
Node-Selectors: <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute op=Exists for 2s
                 node.kubernetes.io/unreachable:NoExecute op=Exists for 2s
Events:
  Type     Reason   Age  From     Message
  ----     ------   ---  ----     -------
  Warning  Evicted  59m  kubelet  The node was low on resource: memory. Container knowledge-base was using 43775192Ki, which exceeds its request of 5000Mi.
  Normal   Killing  59m  kubelet  Stopping container knowledge-base
  Normal   Killing  59m  kubelet  Stopping container istio-proxy
The Events show that the node ran low on memory while the knowledge-base container was using 43775192Ki (roughly 41.7Gi), far more than the 5000Mi it requests in its Deployment. Under that memory pressure the kubelet killed the Pod's containers and the Pod was marked as Evicted.
Solutions
- Analyze the Pod's resource usage: check the evicted Pod's memory, CPU, and disk usage. The kubectl describe pod <pod_name> command shows the Pod's status and events.
- Adjust resource limits: raise the Pod's resource settings (memory or CPU) to match actual needs. Edit the limits in the Pod's YAML file and apply the change with kubectl apply -f <pod_yaml_file>.
- Add nodes: if every node in the cluster is short on resources, scale the cluster out to provide more capacity, for example through your cloud provider's console or API.
- Optimize the application: reduce its resource consumption, for example by fixing memory leaks and tuning CPU usage.
- Use priority and preemption: assign Pods a priority so that, when resources are tight, lower-priority Pods are evicted first. This is configured with the priorityClassName field in the Pod's YAML (see the sketch after this list).
- Clean up evicted Pods in batches.
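For the priority-and-preemption option, here is a minimal sketch; the class name low-priority, its value, and the description are made up for illustration:
apiVersion: scheduling.k8s.io/v1
kind: PriorityClass
metadata:
  name: low-priority            # illustrative name
value: 1000                     # lower values are preempted/evicted before higher ones
globalDefault: false
description: "Non-critical workloads that may be evicted first under pressure"
---
# Referenced from the Pod (or Deployment pod template) spec:
spec:
  priorityClassName: low-priority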
Based on the analysis above, this incident was caused by insufficient memory. Adjust the Pod's resource settings to what the application actually needs: modify the resource section in the workload's YAML file and update it with kubectl apply -f <pod_yaml_file>.
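As a sketch of that YAML route for this particular workload (the file name is illustrative, and the values simply echo the current knowledge-base settings shown above; raise the requests so they reflect the observed usage):
# knowledge-base-deployment.yaml (excerpt)
spec:
  template:
    spec:
      containers:
      - name: knowledge-base
        resources:
          requests:
            cpu: "1"          # adjust requests/limits here to match real usage
            memory: 5000Mi
          limits:
            cpu: "24"
            memory: 48Gi
Then apply it:
kubectl apply -f knowledge-base-deployment.yaml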
If k9s is available, you can also do the following:
(1) In k9s, type :deployments
(2) Select the corresponding Deployment and press e to enter edit mode
(3) Modify the resources section so that it satisfies what the application needs to run, for example:
requests:
  cpu: 10
  memory: 5Gi
limits:
  cpu: 24
  memory: 48Gi
(4) Clean up the Pods
After confirming everything is fine, delete the Pods that were evicted.
# List the evicted Pods
kubectl get pods -n <namespace-name> |grep Evicted
# Delete them in batch
kubectl get pods -n <namespace-name> | grep Evicted | awk '{print $1}' | xargs kubectl delete pod -n <namespace-name>
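Evicted Pods end up in the Failed phase, so as an alternative to the grep/awk pipeline above, a field selector can do the same cleanup, assuming there are no other Failed Pods in the namespace that you want to keep:
kubectl delete pods -n <namespace-name> --field-selector=status.phase=Failed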