k8s Scheduling Principles and Policies in Detail
The k8s Scheduler
How the Scheduler Works
Request flow and scheduling steps:
- Node predicate filtering (Predicate): rule out nodes that cannot satisfy the Pod at all, for example because memory is insufficient or a required port is unavailable.
- Node priority ranking (Priority): score the remaining nodes and rank them by priority.
- Node selection (Select): pick the highest-ranked node for the Pod.

1. The user submits a YAML file describing the Pod through the Kubernetes client kubectl, issuing a resource request to the cluster; in other words, kubectl sends a "POST" request to the APIServer asking it to create the Pod (a minimal example manifest follows this list).
2. The APIServer receives the request and stores the Pod's definition in Etcd. From the moment the cluster starts running, the Scheduler keeps watching the APIServer.
3. The Scheduler learns about the new Pod through the APIServer. It relies on the watch mechanism: as soon as the Pod object is stored in Etcd, the APIServer is notified and immediately passes the Pod-created event on to the Scheduler. When the Scheduler sees that the Pod's destination node is empty (Dest Node=""), it triggers the scheduling process. Scheduling this Pod goes through three phases, predicate filtering, priority ranking, and node selection, which together pick out the best node:
- Node predicate filtering: every node is checked against a set of predicate rules, and the nodes that fail them are filtered out.
- Node priority ranking: the nodes that passed the predicates are scored and ranked so that the node best suited to run the Pod can be chosen.
- Node selection: the node with the highest score is selected to run the Pod; if more than one node shares the top score, one of them is picked at random.
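A minimal sketch of the manifest submitted in step 1 is shown below (the Pod name is illustrative; the image is the one used in the later examples). Note that spec carries no nodeName: that is the empty destination-node field the Scheduler looks for, and after binding the Scheduler writes the chosen node into spec.nodeName.
apiVersion: v1
kind: Pod
metadata:
  name: myapp-pod                  # illustrative name
spec:
  # nodeName is deliberately absent: the Scheduler fills it in after
  # predicate filtering, priority ranking, and node selection.
  containers:
  - name: myapp
    image: ikubernetes/myapp:v1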
How Kubernetes scheduling treats resources
The Kubernetes scheduler is the brain of the cluster, and it matters more and more for raising resource utilization and keeping the services in the cluster running stably. Kubernetes divides resources into two kinds:
Compressible resources (for example CPU cycles and disk I/O bandwidth) can be throttled and reclaimed; a Pod's use of them can be reduced without killing the Pod.
Incompressible resources (for example memory and disk space) generally cannot be reclaimed without killing the Pod. Kubernetes may add support for more resources in the future, such as network bandwidth and storage IOPS.
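A short, hedged illustration of how the two kinds appear in a Pod spec (the Pod name and amounts are illustrative; the image is reused from later examples): cpu is compressible, so a container that hits its limit is throttled, while memory is incompressible, so a container that exceeds its memory limit is killed.
apiVersion: v1
kind: Pod
metadata:
  name: resource-demo              # illustrative name
spec:
  containers:
  - name: myapp
    image: ikubernetes/myapp:v1
    resources:
      requests:
        cpu: 250m                  # compressible: throttled under contention, Pod keeps running
        memory: 256Mi              # incompressible: reclaiming it means killing the Pod
      limits:
        cpu: 500m
        memory: 512Mi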
Common predicate policies

Common priority functions

Node Affinity Scheduling
Node affinity rules come in two forms: hard affinity (required) and soft affinity (preferred).
Affinity means attraction; its counterpart is Anti-Affinity, or mutual exclusion. The image is apt: a Pod choosing a node is like magnets attracting and repelling each other, except that beyond simple positive and negative poles, the attraction and repulsion between Pods and nodes can be configured flexibly.
Advantages of affinity:
- Matching supports richer logical combinations, not just exact string equality.
- Scheduling is split into soft policies and hard policies; under a soft policy, if no node satisfies the rule, the Pod ignores it and scheduling still completes.
The main node affinity fields at present:
requiredDuringSchedulingIgnoredDuringExecution
The Pod must be placed on a node that satisfies the conditions; if no such node exists, scheduling keeps retrying. IgnoredDuringExecution means that once the Pod is running, it keeps running even if the node's labels change and no longer satisfy the Pod's conditions.
requiredDuringSchedulingRequiredDuringExecution (planned; not yet implemented in Kubernetes)
The Pod must be placed on a node that satisfies the conditions; if no such node exists, scheduling keeps retrying. RequiredDuringExecution means that once the Pod is running, if the node's labels change and no longer satisfy the Pod's conditions, the Pod is rescheduled onto a node that does.
preferredDuringSchedulingIgnoredDuringExecution
The Pod is preferentially placed on a node that satisfies the conditions; if no such node exists, the conditions are ignored and the Pod is scheduled normally.
preferredDuringSchedulingRequiredDuringExecution (planned; not yet implemented in Kubernetes)
The Pod is preferentially placed on a node that satisfies the conditions; if no such node exists, the conditions are ignored and the Pod is scheduled normally. RequiredDuringExecution means that if a node's labels later change so that the conditions become satisfied, the Pod is rescheduled onto a node that satisfies them.
Distinguishing soft and hard policies is useful. A hard policy fits cases where the Pod must run on a certain kind of node or it will break, for example when the cluster mixes node architectures and the service depends on features only one architecture provides. A soft policy fits cases where the Pod works either way but is better off when the condition holds, for example preferring a certain zone to cut down network traffic. The distinction is driven by the user's actual needs; there is no absolute technical dependency.
Below is an example from the official documentation:
apiVersion: v1
kind: Pod
metadata:
  name: with-node-affinity
spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: kubernetes.io/e2e-az-name
            operator: In
            values:
            - e2e-az1
            - e2e-az2
      preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 1
        preference:
          matchExpressions:
          - key: another-node-label-key
            operator: In
            values:
            - another-node-label-value
  containers:
  - name: with-node-affinity
    image: gcr.io/google_containers/pause:2.0
This Pod defines both requiredDuringSchedulingIgnoredDuringExecution and preferredDuringSchedulingIgnoredDuringExecution nodeAffinity. The first requires the Pod to run on a node in specific AZs; the second prefers nodes that also carry the label another-node-label-key: another-node-label-value.
The matching logic here is "the label's value is in a list". The available operators are:
- In: the label's value is in the given list
- NotIn: the label's value is not in the given list
- Exists: the label exists
- DoesNotExist: the label does not exist
- Gt: the label's value is greater than the given value (values are compared as integers)
- Lt: the label's value is less than the given value (values are compared as integers)
If nodeSelectorTerms under nodeAffinity lists multiple terms, a node that satisfies any one of them is acceptable; if a single matchExpressions lists several expressions, the node must satisfy all of them before the Pod can run there.
Note that there is no node anti-affinity as such, because NotIn and DoesNotExist provide the equivalent functionality, as the sketch below illustrates.
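A minimal sketch of that idea, assuming a hypothetical node label disktype=hdd (the Pod name and label key are illustrative; the image is reused from the other examples): NotIn keeps the Pod off any node whose disktype value appears in the list.
apiVersion: v1
kind: Pod
metadata:
  name: avoid-hdd-nodes            # illustrative name
spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: disktype          # hypothetical node label
            operator: NotIn        # excludes nodes whose disktype is listed below
            values:
            - hdd
  containers:
  - name: myapp
    image: ikubernetes/myapp:v1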
Weights for node soft affinity
preferredDuringSchedulingIgnoredDuringExecution
A soft control: when the conditions cannot be met, the Pod accepts being placed on other nodes that do not satisfy them.
The weight field defines the priority of each preference, from 1 to 100; the larger the value, the higher the priority.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp-deploy-with-node-affinity
spec:
  replicas: 2
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      name: myapp-pod
      labels:
        app: myapp
    spec:
      affinity:
        nodeAffinity:
          preferredDuringSchedulingIgnoredDuringExecution: # node soft affinity
          - weight: 60
            preference:
              matchExpressions:
              - {key: zone, operator: In, values: ["foo"]}
          - weight: 30
            preference:
              matchExpressions:
              - {key: ssd, operator: Exists, values: []}
      containers:
      - name: myapp
        image: ikubernetes/myapp:v1

Pod Affinity Scheduling
Inter-Pod affinity places certain Pods close together (on the same node, rack, zone, or region).
Inter-Pod anti-affinity keeps certain Pods apart at runtime.
The scheduler can place the first Pod anywhere; Pods that have an affinity or anti-affinity relationship with it are then placed dynamically relative to it.
Node predicate filtering uses the MatchInterPodAffinity predicate, and node scoring uses the InterPodAffinityPriority priority function.
Position topology defines what counts as "the same location".
Hard Pod Affinity
requiredDuringSchedulingIgnoredDuringExecution
Pod affinity describes a Pod's placement dependency on existing Pods with certain characteristics; the Pod being depended on must therefore already exist.
# The Pod being depended on:
kubectl run tomcat -l app=tomcat --image tomcat:alpine
kubectl explain pod.spec.affinity.podAffinity.requiredDuringSchedulingIgnoredDuringExecution.topologyKey
apiVersion: v1
kind: Pod
metadata:
  name: with-pod-affinity
spec:
  affinity:
    podAffinity:
      requiredDuringSchedulingIgnoredDuringExecution: # hard affinity scheduling
      - labelSelector:
          matchExpressions: # set-based selector
          - {key: app, operator: In, values: ["tomcat"]} # selects the Pod being depended on
        # In other words, this Pod must be placed together with Pods labeled app=tomcat
        topologyKey: kubernetes.io/hostname # the hostnames of the nodes running the selected Pods define "the same location"
  containers:
  - name: myapp
    image: ikubernetes/myapp:v1
Soft Pod Affinity
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp-with-preferred-pod-affinity
spec:
  replicas: 3
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      name: myapp
      labels:
        app: myapp
    spec:
      affinity:
        podAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
          - weight: 80
            podAffinityTerm:
              labelSelector:
                matchExpressions:
                - {key: app, operator: In, values: ["cache"]}
              topologyKey: zone
          - weight: 20
            podAffinityTerm:
              labelSelector:
                matchExpressions:
                - {key: app, operator: In, values: ["db"]}
              topologyKey: zone
      containers:
      - name: myapp
        image: ikubernetes/myapp:v1
Pod Anti-Affinity Scheduling
Pod anti-affinity is used to spread instances of the same class of application across different zones, racks, or nodes.
Replace spec.affinity.podAffinity with spec.affinity.podAntiAffinity.
Anti-affinity scheduling also comes in soft (preferred) and hard (required) forms; a Deployment-level sketch of the common "spread replicas across nodes" pattern follows the example below.
apiVersion: v1
kind: Pod
metadata:
  name: pod-first
  labels:
    app: myapp
    tier: fronted
spec:
  containers:
  - name: myapp
    image: ikubernetes/myapp:v1
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-second
  labels:
    app: backend
    tier: db
spec:
  containers:
  - name: busybox
    image: busybox:latest
    imagePullPolicy: IfNotPresent
    command: ["/bin/sh", "-c", "sleep 3600"]
  affinity:
    podAntiAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchExpressions:
          - {key: app, operator: In, values: ["myapp"]}
        topologyKey: zone
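A minimal sketch of the spread pattern mentioned above (the Deployment name is illustrative; the labels and image are reused from the earlier examples): each replica refuses to share a node with another Pod carrying the same app=myapp label, so the three replicas land on three different nodes.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp-spread                       # illustrative name
spec:
  replicas: 3
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchExpressions:
              - {key: app, operator: In, values: ["myapp"]}
            topologyKey: kubernetes.io/hostname   # "same location" = same node
      containers:
      - name: myapp
        image: ikubernetes/myapp:v1
With the required form, a fourth replica on a three-node cluster would stay Pending; switching to preferredDuringSchedulingIgnoredDuringExecution relaxes the rule to a preference.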
Taints and Tolerations
Taints are key/value attributes defined on a node; they let the node refuse to have Pods scheduled onto it unless the Pod has a toleration that accepts the node's taints. Tolerations are key/value attributes defined on a Pod; they declare which taints the Pod can tolerate, and the scheduler only places the Pod onto nodes whose taints it tolerates, or onto nodes with no taints at all.
The mechanism is implemented by the PodToleratesNodeTaints predicate and the TaintTolerationPriority priority function.
- Node affinity attracts Pods to a particular class of nodes (nodeSelector and affinity).
- Taints give nodes the ability to repel particular Pods.
Defining taints and tolerations
- Taints are defined in nodes.spec.taints; tolerations are defined in pods.spec.tolerations.
- Syntax: key=value:effect
effect defines the level of repulsion:
- NoSchedule: not tolerated, but it only affects scheduling; Pods already running on the node are unaffected, only newly scheduled Pods are rejected.
- PreferNoSchedule: a soft constraint; existing Pods on the node are unaffected, and if no other node fits, the Pod may still be scheduled here.
- NoExecute: not tolerated; when the taint changes, Pods already running on the node are evicted.
When defining tolerations on a Pod:
- Equality match: the toleration and the taint match on key, value, and effect, all three identical.
- Existence match: key and effect match exactly, and value is left empty.
- A node may carry multiple taints, and a Pod may carry multiple tolerations.
Managing node taints
- The same key and value with a different effect counts as a different taint.
Add a taint to a node:
kubectl taint node <node-name> <key>=<value>:<effect>
kubectl taint node node2 node-type=production:NoSchedule # example
View a node's taints:
kubectl get nodes <nodename> -o go-template={{.spec.taints}}
Remove a node's taints:
kubectl taint node <node-name> <key>[:<effect>]-
kubectl patch nodes <node-name> -p '{"spec":{"taints":[]}}'
kubectl taint node kube-node1 node-type=production:NoSchedule
kubectl get nodes kube-node1 -o go-template={{.spec.taints}}
# Remove the taint whose key is node-type and effect is NoSchedule
kubectl taint node kube-node1 node-type:NoSchedule-
# Remove all taints whose key is node-type
kubectl taint node kube-node1 node-type-
# Remove all taints
kubectl patch nodes kube-node1 -p '{"spec":{"taints":[]}}'
Adding tolerations to a Pod
- Add them under the spec.tolerations field
- tolerationSeconds defines how long eviction of the Pod is delayed
Equality match:
tolerations:
- key: "key1"
  operator: "Equal"            # the match condition is Equal
  value: "value1"
  effect: "NoExecute"
  tolerationSeconds: 3600
Existence match:
tolerations:
- key: "key1"
  operator: "Exists"           # existence match: any taint whose key exists matches
  effect: "NoExecute"
  tolerationSeconds: 3600
apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    deployment.kubernetes.io/revision: "1"
    domainNames: ""
    exposeType: HostNetwork
    io.daocloud/dce.ingress.metrics-port: "12955"
    lbType: nginx
    taintNodes: "false"
    tpsLevel: "20000"
    watchNamespace: ""
  creationTimestamp: "2021-09-15T06:36:59Z"
  generation: 1
  labels:
    ingress.loadbalancer.dce.daocloud.io/ingress-type: nginx
    io.daocloud.dce.ingress.controller.name: ""
    loadbalancer.dce.daocloud.io/adapter: ingress
    loadbalancer.dce.daocloud.io/instance: lb01
    resource.ingress.loadbalancer.dce.daocloud.io: lb01
  name: lb01-ingress1
  namespace: kube-system
  resourceVersion: "28248"
  selfLink: /apis/apps/v1/namespaces/kube-system/deployments/lb01-ingress1
  uid: 760b31d3-66b0-4547-8438-fb7f972344a5
spec:
  progressDeadlineSeconds: 600
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      name.ingress.loadbalancer.dce.daocloud.io/lb01: enabled
  strategy:
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 1
    type: RollingUpdate
  template:
    metadata:
      annotations:
        prometheus.io/port: "12955"
        prometheus.io/scrape: "true"
      creationTimestamp: null
      labels:
        ingress.loadbalancer.dce.daocloud.io/ingress-type: nginx
        name.ingress.loadbalancer.dce.daocloud.io/lb01: enabled
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: beta.kubernetes.io/os
                operator: In
                values:
                - linux
              - key: kubernetes.io/hostname
                operator: In
                values:
                - dce-172-16-17-21
                - dce-172-16-17-22
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchExpressions:
              - key: name.ingress.loadbalancer.dce.daocloud.io/lb01
                operator: In
                values:
                - enabled
            topologyKey: kubernetes.io/hostname
      containers:
      - args:
        - /nginx-ingress-controller
        - --healthz-port=12955
        - --status-port=12989
        - --https-port=12973
        - --default-server-port=12956
        - --stream-port=12943
        - --profiler-port=12963
        - --http-port=12987
        - --configmap=kube-system/lb01-ingress
        - --default-ssl-certificate=kube-system/lb01-ingress
        - --tcp-services-configmap=kube-system/lb01-tcp-services
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              apiVersion: v1
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              apiVersion: v1
              fieldPath: metadata.namespace
        image: 172.16.17.250/kube-system/dce-ingress-controller:v0.46.0-1
        imagePullPolicy: IfNotPresent
        livenessProbe:
          failureThreshold: 3
          httpGet:
            path: /healthz
            port: 12955
            scheme: HTTP
          initialDelaySeconds: 10
          periodSeconds: 10
          successThreshold: 1
          timeoutSeconds: 5
        name: nginx-ingress-controller
        ports:
        - containerPort: 12955
          hostPort: 12955
          name: healthz
          protocol: TCP
        - containerPort: 12989
          hostPort: 12989
          name: status
          protocol: TCP
        - containerPort: 12973
          hostPort: 12973
          name: https
          protocol: TCP
        - containerPort: 12956
          hostPort: 12956
          name: default-server
          protocol: TCP
        - containerPort: 12943
          hostPort: 12943
          name: stream
          protocol: TCP
        - containerPort: 12963
          hostPort: 12963
          name: profiler
          protocol: TCP
        - containerPort: 12987
          hostPort: 12987
          name: http
          protocol: TCP
        readinessProbe:
          failureThreshold: 3
          httpGet:
            path: /healthz
            port: 12955
            scheme: HTTP
          initialDelaySeconds: 10
          periodSeconds: 10
          successThreshold: 1
          timeoutSeconds: 5
        resources:
          limits:
            cpu: "4"
            memory: 2Gi
          requests:
            cpu: "1"
            memory: 512Mi
        securityContext:
          privileged: true
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
        volumeMounts:
        - mountPath: /var/log
          name: log-volume
      dnsPolicy: ClusterFirst
      hostNetwork: true
      initContainers:
      - args:
        - -c
        - ' mkdir -p /var/log/nginx; chown -hR 101:101 /var/log/nginx; mkdir -p /var/log/dce-ingress; chown -hR 101:101 /var/log/dce-ingress; echo init-done;'
        command:
        - /bin/sh
        image: 172.16.17.250/kube-system/dce-busybox:1.30.1
        imagePullPolicy: IfNotPresent
        name: init
        resources: {}
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
        volumeMounts:
        - mountPath: /var/log/
          name: log-volume
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      serviceAccount: lb01
      serviceAccountName: lb01
      terminationGracePeriodSeconds: 60
      tolerations:
      - effect: NoSchedule
        key: node-role.kubernetes.io/master
      - effect: NoSchedule
        key: node-role.kubernetes.io/load-balance
      - key: CriticalAddonsOnly
        operator: Exists
      - effect: NoExecute
        key: node.kubernetes.io/not-ready
        operator: Exists
      - effect: NoExecute
        key: node.kubernetes.io/unreachable
        operator: Exists
      - effect: NoSchedule
        key: node.kubernetes.io/disk-pressure
        operator: Exists
      - effect: NoSchedule
        key: node.kubernetes.io/memory-pressure
        operator: Exists
      volumes:
      - hostPath:
          path: /var/log/
          type: Directory
        name: log-volume
Taints for problem nodes
Kubernetes adds taints to problem nodes automatically; when the NoExecute effect is used, Pods already running on the node are evicted (a toleration sketch using tolerationSeconds follows the list below).
Core Kubernetes components normally tolerate this class of taints.
- node.kubernetes.io/not-ready: added automatically when the node enters the NotReady state
- node.alpha.kubernetes.io/unreachable: added automatically when the node becomes unreachable
- node.kubernetes.io/out-of-disk: added automatically when the node enters the OutOfDisk state
- node.kubernetes.io/memory-pressure: the node is under memory pressure
- node.kubernetes.io/disk-pressure: the node is under disk pressure
- node.kubernetes.io/network-unavailable: the node's network is unavailable
- node.cloudprovider.kubernetes.io/uninitialized: added automatically when the kubelet is started by an external cloud provider, and removed once the cloud controller has initialized the node
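A minimal sketch of tolerating these automatic taints for a bounded time (the Pod name and the 60-second value are illustrative; the image is reused from earlier examples). Without an explicit entry, the admission machinery typically injects equivalent not-ready/unreachable tolerations with a default of 300 seconds, so a Pod normally survives on a failing node for about five minutes before being evicted.
apiVersion: v1
kind: Pod
metadata:
  name: evict-after-60s              # illustrative name
spec:
  tolerations:
  - key: node.kubernetes.io/not-ready
    operator: Exists
    effect: NoExecute
    tolerationSeconds: 60            # evicted 60s after the node goes NotReady
  - key: node.kubernetes.io/unreachable
    operator: Exists
    effect: NoExecute
    tolerationSeconds: 60            # evicted 60s after the node becomes unreachable
  containers:
  - name: myapp
    image: ikubernetes/myapp:v1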
Pod Priority and Preemptive Scheduling
Priority expresses how important a Pod object is.
Priority affects both the order in which Pods are scheduled onto nodes and the order in which they are evicted.
When a Pod cannot be scheduled, the scheduler tries to preempt (evict) lower-priority Pods so that the pending Pod can be placed.
Pod priority and preemption were disabled by default in older Kubernetes releases (current releases enable them by default).
- To enable them there: add PodPriority=true to the --feature-gates of kube-apiserver, kube-scheduler, and the kubelet.
- To use them: create the priority class beforehand, then reference it through the priorityClassName field when creating the Pod, as in the sketch below.
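A minimal sketch, assuming a cluster that serves the scheduling.k8s.io/v1 API (the class name, value, and Pod name are illustrative; the image is reused from earlier examples):
apiVersion: scheduling.k8s.io/v1
kind: PriorityClass
metadata:
  name: high-priority                # illustrative class name
value: 1000000                       # larger value = higher priority
globalDefault: false
description: "For latency-critical workloads"
---
apiVersion: v1
kind: Pod
metadata:
  name: important-app                # illustrative name
spec:
  priorityClassName: high-priority   # binds the Pod to the class defined above
  containers:
  - name: myapp
    image: ikubernetes/myapp:v1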
Summary
The above is based on my own experience. I hope it gives everyone a useful reference, and I hope you will keep supporting 腳本之家.