The Scheduling Framework in Practice

2024-03-15 12:58

This article walks through the Kubernetes Scheduling Framework in practice: its extension points, how to implement a custom plugin, and how to build and deploy a scheduler that carries it.

The Scheduling Framework

Architecture

(The workflow diagram from the original post is not reproduced here; it illustrates the scheduling and binding cycles described below.)

For related documentation, see sig-scheduling.

Background

The scheduling framework defines a set of extension points. Users implement the interfaces defined for those extension points to express their own scheduling logic and register the resulting plugins at the extension points; whenever the framework reaches an extension point while executing the scheduling workflow, it calls the plugins registered there.

Scheduling a Pod generally involves two phases: a scheduling cycle and a binding cycle.

Together, the scheduling cycle and the binding cycle are called the scheduling context. The scheduling cycle runs synchronously (one Pod at a time), while the binding cycle runs asynchronously. Either cycle aborts midway when the scheduler decides there is no feasible node for the Pod or an internal error occurs; the Pod is then put back into the scheduling queue and waits for the next retry.

Pod Scheduling Context

Scheduling Cycle

QueueSort → Pre-filter → Filter → Post-filter → Scoring → Normalize scoring → Reserve → Permit

The QueueSort extension orders the queue of pending Pods and thereby decides which Pod is scheduled first. A QueueSort plugin essentially only needs to implement a single method, Less(Pod1, Pod2), which decides which of two Pods should be scheduled first; only one QueueSort plugin can be in effect at a time.
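
To make this concrete, here is a hypothetical QueueSort plugin modeled on the in-tree priority-based sort: higher-priority Pods are dequeued first, ties are broken by the time the Pod entered the queue. The PodInfo-based Less signature follows the v1alpha1 interface referenced later in this article; exact types differ between Kubernetes releases, so treat the signature as an assumption.

package prioritysort

import (
    v1 "k8s.io/api/core/v1"
    framework "k8s.io/kubernetes/pkg/scheduler/framework/v1alpha1"
)

// Name of this hypothetical QueueSort plugin.
const Name = "priority-sort"

// PrioritySort dequeues higher-priority Pods first and breaks ties by queue timestamp.
type PrioritySort struct{}

var _ framework.QueueSortPlugin = &PrioritySort{}

func (ps *PrioritySort) Name() string { return Name }

// Less reports whether podInfo1 should be scheduled before podInfo2.
// Only one QueueSort plugin can be enabled at a time.
func (ps *PrioritySort) Less(podInfo1, podInfo2 *framework.PodInfo) bool {
    p1 := podPriority(podInfo1.Pod)
    p2 := podPriority(podInfo2.Pod)
    if p1 != p2 {
        return p1 > p2
    }
    return podInfo1.Timestamp.Before(podInfo2.Timestamp)
}

// podPriority returns the Pod's priority, treating an unset priority as 0.
func podPriority(pod *v1.Pod) int32 {
    if pod.Spec.Priority != nil {
        return *pod.Spec.Priority
    }
    return 0
}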

The Pre-filter extension pre-processes the Pod's information and checks preconditions that the cluster or the Pod must meet; if a pre-filter returns an error, the scheduling cycle is aborted.

The Filter extension excludes nodes that cannot run the Pod. For each node the scheduler calls the filter extensions in order; if any filter marks the node as infeasible, the remaining filters are not executed for that node. The scheduler may evaluate the filters for multiple nodes in parallel.

Post-filter is a notification-style extension point. It is invoked with the list of nodes that survived the filter phase; a plugin can use this information to update its internal state or to emit logs or metrics.

The Scoring extension assigns a score to every feasible node; the scheduler calls each scoring extension once per node, and the result is an integer within a fixed range. In the normalize-scoring phase the scheduler combines each scoring extension's score for a node with the weight of that extension to produce the final result.

The Normalize scoring extension adjusts each node's score before the scheduler computes the final ranking. A plugin registered at this extension point receives the scores produced by the Scoring extension of the same plugin as its input, and the framework calls each plugin's normalize-scoring extension exactly once per scheduling cycle.
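
A hypothetical scoring plugin helps make the Score → NormalizeScore relationship concrete: Score returns a raw integer per node, and NormalizeScore (exposed through ScoreExtensions) rescales that plugin's raw scores into the framework's expected range before weights are applied. The signatures below follow the v1alpha1 shape of this era and are assumptions; check interface.go for your version.

package samplescore

import (
    "context"

    v1 "k8s.io/api/core/v1"
    framework "k8s.io/kubernetes/pkg/scheduler/framework/v1alpha1"
)

const Name = "sample-score"

// SampleScore is a hypothetical Score plugin that scores nodes by the length of
// their name, purely to demonstrate the mechanics of Score and NormalizeScore.
type SampleScore struct{}

var _ framework.ScorePlugin = &SampleScore{}

func (s *SampleScore) Name() string { return Name }

// Score returns a raw score for one node; the framework calls it once per candidate node.
func (s *SampleScore) Score(ctx context.Context, state *framework.CycleState, pod *v1.Pod, nodeName string) (int64, *framework.Status) {
    return int64(len(nodeName)), framework.NewStatus(framework.Success, "")
}

// ScoreExtensions exposes NormalizeScore to the framework.
func (s *SampleScore) ScoreExtensions() framework.ScoreExtensions {
    return s
}

// NormalizeScore rescales this plugin's raw scores into [0, MaxNodeScore].
func (s *SampleScore) NormalizeScore(ctx context.Context, state *framework.CycleState, pod *v1.Pod, scores framework.NodeScoreList) *framework.Status {
    var max int64
    for _, ns := range scores {
        if ns.Score > max {
            max = ns.Score
        }
    }
    if max == 0 {
        return framework.NewStatus(framework.Success, "")
    }
    for i := range scores {
        scores[i].Score = scores[i].Score * framework.MaxNodeScore / max
    }
    return framework.NewStatus(framework.Success, "")
}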

Reserve is a notification-style extension point that stateful plugins can use to record the resources reserved on a node for the Pod. This happens before the scheduler actually binds the Pod to the node, and it exists to prevent a situation where, while the scheduler is waiting for the (asynchronous) binding to finish, it schedules new Pods onto the node and the resources actually used exceed what is available. Reserve is the last step of the scheduling cycle; once a Pod enters the reserved state, either the Unreserve extension is triggered when binding fails, or the binding cycle is concluded by the Post-bind extension when binding succeeds.

The Permit extension blocks or delays the binding of a Pod to a node. A permit plugin can do one of the following three things (a small sketch follows the list below):

approve: once all permit plugins have approved the binding of the Pod to the node, the scheduler proceeds with the binding
deny: if any permit plugin denies the binding, the Pod is put back into the scheduling queue and the Unreserve extension is triggered
wait: if a permit plugin returns wait, the Pod stays in the permit phase until it is approved by some other plugin; if the timeout expires first, wait turns into deny, the Pod is put back into the scheduling queue, and the Unreserve extension is triggered.
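
Here is a hypothetical Permit plugin illustrating the three outcomes. The "example.com/gated" label, the 10-second timeout, and the v1alpha1-era signature are all assumptions for illustration only.

package samplepermit

import (
    "context"
    "time"

    v1 "k8s.io/api/core/v1"
    framework "k8s.io/kubernetes/pkg/scheduler/framework/v1alpha1"
)

const Name = "sample-permit"

// SamplePermit holds Pods carrying a (made-up) gate label in the permit phase
// for up to 10 seconds and approves everything else immediately.
type SamplePermit struct{}

var _ framework.PermitPlugin = &SamplePermit{}

func (p *SamplePermit) Name() string { return Name }

// Permit returns Success (approve), Unschedulable or Error (deny), or Wait plus a
// timeout. A waiting Pod must later be allowed or rejected by another plugin
// (for example via the framework handle's waiting-pod accessors); otherwise the
// timeout turns the wait into a deny and the Pod goes back to the scheduling queue.
func (p *SamplePermit) Permit(ctx context.Context, state *framework.CycleState, pod *v1.Pod, nodeName string) (*framework.Status, time.Duration) {
    if pod.Labels["example.com/gated"] == "true" {
        return framework.NewStatus(framework.Wait, ""), 10 * time.Second
    }
    return framework.NewStatus(framework.Success, ""), 0
}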

Binding Cycle

Pre-bind → Bind → Post-bind (Unreserve is called instead if the Pod is rejected at any point)

The Pre-bind extension runs logic before the Pod is bound. For example, a pre-bind plugin can mount a network-backed volume on the node so that the Pod can use it. If any pre-bind extension returns an error, the Pod is put back into the scheduling queue and the Unreserve extension is triggered.

The Bind extension binds the Pod to a node.

Bind extensions are executed only after all pre-bind extensions have succeeded
The framework calls the bind extensions one by one, in the order in which they were registered
A given bind extension may choose either to handle the Pod or to skip it
If one bind extension handles the binding of the Pod to the node, the remaining bind extensions are skipped

Post-bind is a notification-style extension point.

Post-bind extensions are called after the Pod has been successfully bound to a node
Post-bind is the last step of the binding cycle and can be used to clean up associated resources

Unreserve is a notification-style extension point: if resources were reserved for a Pod and the Pod is later rejected during the binding cycle, the unreserve extensions are called. An unreserve extension should release the compute resources that were reserved for the Pod on the node. Within a single plugin, the reserve and unreserve extensions should come in pairs, as sketched below.
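
A hypothetical stateful plugin with a Reserve/Unreserve pair might look like the sketch below: Reserve records an in-memory reservation for the Pod, and Unreserve rolls it back if binding is later rejected or fails. The signatures are the assumed v1alpha1 shape; in some releases Reserve and Unreserve are separate interfaces, in later ones Unreserve is part of ReservePlugin, so check interface.go for your version.

package samplereserve

import (
    "context"
    "sync"

    v1 "k8s.io/api/core/v1"
    framework "k8s.io/kubernetes/pkg/scheduler/framework/v1alpha1"
)

const Name = "sample-reserve"

// SampleReserve tracks which node each Pod has been reserved on and forgets it
// again if the binding cycle does not complete.
type SampleReserve struct {
    mu       sync.Mutex
    reserved map[string]string // pod UID -> node name
}

var _ framework.ReservePlugin = &SampleReserve{}
var _ framework.UnreservePlugin = &SampleReserve{}

func (r *SampleReserve) Name() string { return Name }

// Reserve is called at the end of the scheduling cycle, before binding starts.
func (r *SampleReserve) Reserve(ctx context.Context, state *framework.CycleState, pod *v1.Pod, nodeName string) *framework.Status {
    r.mu.Lock()
    defer r.mu.Unlock()
    if r.reserved == nil {
        r.reserved = map[string]string{}
    }
    r.reserved[string(pod.UID)] = nodeName
    return framework.NewStatus(framework.Success, "")
}

// Unreserve is called if the Pod is rejected or fails later in the binding cycle.
func (r *SampleReserve) Unreserve(ctx context.Context, state *framework.CycleState, pod *v1.Pod, nodeName string) {
    r.mu.Lock()
    defer r.mu.Unlock()
    delete(r.reserved, string(pod.UID))
}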

The corresponding extension-point interfaces can be found in kubernetes/kubernetes pkg/scheduler/framework/v1alpha1/interface.go.

Simple examples from the source tree can be found in kubernetes/kubernetes pkg/scheduler/framework/plugins/examples.

Example

This example implements a few of the extension points and registers the plugin with the scheduler. Below is where the default scheduler registers out-of-tree plugins when it is initialized.

func NewRegistry() Registry {
    return Registry{
        // FactoryMap:
        // New plugins are registered here.
        // example:
        // {
        //  stateful_plugin.Name: stateful.NewStatefulMultipointExample,
        //  fooplugin.Name: fooplugin.New,
        // }
    }
}

As you can see, nothing is registered there by default, so for the scheduler to recognize our plugin code we need a scheduler of our own. There is no need to implement one from scratch: we can reuse the default scheduler and simply register our plugin through the NewRegistry() function above. The kube-scheduler source file kubernetes/cmd/kube-scheduler/app/server.go contains an entry function, NewSchedulerCommand, whose parameters are a list of Option values, and an Option is exactly a hook for plugin registration.

// Option configures a framework.Registry.
type Option func(framework.Registry) error

// NewSchedulerCommand creates a *cobra.Command object with default parameters and registryOptions
func NewSchedulerCommand(registryOptions ...Option) *cobra.Command {
    // ......
}

We can therefore use this function directly as our program's entry point and pass in our own plugins as arguments. The same file also provides a WithPlugin function for creating an Option instance; a minimal entry point built this way is sketched after the snippet below.

// WithPlugin creates an Option based on plugin name and factory.
func WithPlugin(name string, factory framework.PluginFactory) Option {
    return func(registry framework.Registry) error {
        return registry.Register(name, factory)
    }
}

type PluginFactory = func(configuration *runtime.Unknown, f FrameworkHandle) (Plugin, error)
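
Putting this together, the entry point of the custom scheduler can simply be the default scheduler's command with one extra Option. The sketch below assumes a plugin package named sample exposing Name and New (shown a bit further below); the import path for that package is illustrative, not the author's actual repository layout.

package main

import (
    "math/rand"
    "os"
    "time"

    "k8s.io/component-base/logs"
    "k8s.io/kubernetes/cmd/kube-scheduler/app"

    // Illustrative import path for the sample plugin sketched below.
    "github.com/example/sample-scheduler-framework/pkg/plugins/sample"
)

func main() {
    rand.Seed(time.Now().UnixNano())

    // Reuse the default kube-scheduler command and register our plugin with it.
    command := app.NewSchedulerCommand(
        app.WithPlugin(sample.Name, sample.New),
    )

    logs.InitLogs()
    defer logs.FlushLogs()

    if err := command.Execute(); err != nil {
        os.Exit(1)
    }
}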

sample.New is a function of exactly this PluginFactory type; inside it we can read the plugin's configuration and whatever other data we need, and then act on it. The plugin in this example simply logs the data it receives; if you have real requirements you can process that data accordingly. Only the PreFilter, Filter, and PreBind extension points are implemented here; the others can be added in the same way.
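
The original plugin source is not reproduced in this copy of the article; the following is a minimal sketch that would produce log lines like the ones shown later. The v1alpha1-era callback signatures, the framework.DecodeInto helper, and the node lookup through the framework handle are assumptions that have changed across Kubernetes releases (in particular, whether Filter and PreBind receive a node name or a NodeInfo object), so check interface.go for your version.

package sample

import (
    "context"
    "fmt"

    v1 "k8s.io/api/core/v1"
    "k8s.io/apimachinery/pkg/runtime"
    "k8s.io/klog"
    framework "k8s.io/kubernetes/pkg/scheduler/framework/v1alpha1"
)

// Name must match the plugin name used in the KubeSchedulerConfiguration below.
const Name = "sample-plugin"

// Args mirrors the pluginConfig args from the ConfigMap below.
type Args struct {
    FavoriteColor  string `json:"favorite_color,omitempty"`
    FavoriteNumber int    `json:"favorite_number,omitempty"`
    ThanksTo       string `json:"thanks_to,omitempty"`
}

// Sample implements the PreFilter, Filter and PreBind extension points and only logs what it sees.
type Sample struct {
    args   *Args
    handle framework.FrameworkHandle
}

var _ framework.PreFilterPlugin = &Sample{}
var _ framework.FilterPlugin = &Sample{}
var _ framework.PreBindPlugin = &Sample{}

// New matches the PluginFactory type shown above.
func New(configuration *runtime.Unknown, f framework.FrameworkHandle) (framework.Plugin, error) {
    args := &Args{}
    // DecodeInto is assumed to be the v1alpha1 helper for decoding plugin args.
    if err := framework.DecodeInto(configuration, args); err != nil {
        return nil, err
    }
    klog.V(3).Infof("get plugin config args: %+v", args)
    return &Sample{args: args, handle: f}, nil
}

func (s *Sample) Name() string { return Name }

func (s *Sample) PreFilter(ctx context.Context, state *framework.CycleState, pod *v1.Pod) *framework.Status {
    klog.V(3).Infof("prefilter pod: %v", pod.Name)
    return framework.NewStatus(framework.Success, "")
}

func (s *Sample) PreFilterExtensions() framework.PreFilterExtensions { return nil }

func (s *Sample) Filter(ctx context.Context, state *framework.CycleState, pod *v1.Pod, nodeName string) *framework.Status {
    klog.V(3).Infof("filter pod: %v, node: %v", pod.Name, nodeName)
    return framework.NewStatus(framework.Success, "")
}

func (s *Sample) PreBind(ctx context.Context, state *framework.CycleState, pod *v1.Pod, nodeName string) *framework.Status {
    // Look the node up through the framework handle so we can log it (an assumption
    // about the handle's API in this era).
    nodeInfo, err := s.handle.SnapshotSharedLister().NodeInfos().Get(nodeName)
    if err != nil {
        return framework.NewStatus(framework.Error, fmt.Sprintf("get node %q from snapshot: %v", nodeName, err))
    }
    klog.V(3).Infof("prebind node info: %+v", nodeInfo.Node())
    return framework.NewStatus(framework.Success, "")
}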

Notes

Run go build before building the image:

docker build -t my-scheduler-framework-demo/sample-scheduler:1.0 .

The scheduler manifests are as follows:

# ClusterRole: the cluster-wide permissions the scheduler needs
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: sample-scheduler-clusterrole
rules:
- apiGroups:
  - ""
  resources:
  - endpoints
  - events
  verbs:
  - create
  - get
  - update
- apiGroups:
  - ""
  resources:
  - nodes
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - pods
  verbs:
  - delete
  - get
  - list
  - watch
  - update
- apiGroups:
  - ""
  resources:
  - bindings
  - pods/binding
  verbs:
  - create
- apiGroups:
  - ""
  resources:
  - pods/status
  verbs:
  - patch
  - update
- apiGroups:
  - ""
  resources:
  - replicationcontrollers
  - services
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - apps
  - extensions
  resources:
  - replicasets
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - apps
  resources:
  - statefulsets
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - policy
  resources:
  - poddisruptionbudgets
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - persistentvolumeclaims
  - persistentvolumes
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - configmaps
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - "storage.k8s.io"
  resources:
  - storageclasses
  - csinodes
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - "coordination.k8s.io"
  resources:
  - leases
  verbs:
  - create
  - get
  - list
  - update
- apiGroups:
  - "events.k8s.io"
  resources:
  - events
  verbs:
  - create
  - patch
  - update
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: sample-scheduler-sa
  namespace: kube-system
---
# ClusterRoleBinding: grants the ClusterRole's permissions to the ServiceAccount
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: sample-scheduler-clusterrolebinding
  namespace: kube-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: sample-scheduler-clusterrole
subjects:
- kind: ServiceAccount
  name: sample-scheduler-sa
  namespace: kube-system
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: scheduler-config
  namespace: kube-system
data:
  scheduler-config.yaml: |
    apiVersion: kubescheduler.config.k8s.io/v1alpha1
    kind: KubeSchedulerConfiguration
    schedulerName: sample-scheduler
    leaderElection:
      leaderElect: true
      lockObjectName: sample-scheduler
      lockObjectNamespace: kube-system
    plugins:
      preFilter:
        enabled:
        - name: "sample-plugin"
      filter:
        enabled:
        - name: "sample-plugin"
      preBind:
        enabled:
        - name: "sample-plugin"
    pluginConfig:
    - name: "sample-plugin"
      args:
        favorite_color: "#326CE5"
        favorite_number: 7
        thanks_to: "tanjunchen"
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: sample-scheduler
  namespace: kube-system
  labels:
    component: sample-scheduler
spec:
  replicas: 1
  selector:
    matchLabels:
      component: sample-scheduler
  template:
    metadata:
      labels:
        component: sample-scheduler
    spec:
      serviceAccount: sample-scheduler-sa
      priorityClassName: system-cluster-critical
      volumes:
      - name: scheduler-config
        configMap:
          name: scheduler-config
      containers:
      - name: scheduler-ctrl
        image: tanjunchen/sample-scheduler:1.0
        imagePullPolicy: IfNotPresent
        args:
        - sample-scheduler-framework
        - --config=/etc/kubernetes/scheduler-config.yaml
        - --v=3
        resources:
          requests:
            cpu: "50m"
        volumeMounts:
        - name: scheduler-config
          mountPath: /etc/kubernetes

Deploy the resource objects above with kubectl apply -f <file>; this gives us a scheduler named sample-scheduler.

The test manifest is as follows:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-scheduler
spec:
  replicas: 1
  selector:
    matchLabels:
      app: test-scheduler
  template:
    metadata:
      labels:
        app: test-scheduler
    spec:
      # schedulerName must match the name of our custom scheduler
      schedulerName: sample-scheduler
      containers:
      - image: nginx
        imagePullPolicy: IfNotPresent
        name: nginx
        ports:
        - containerPort: 80

Run the following command to check that the scheduler Pod is up:

kubectl get pods -n kube-system -l component=sample-scheduler

Run the following command to watch the scheduling logs:

kubectl logs -f pod-name -n kube-system

I0226 13:22:06.585043       1 reflector.go:158] Listing and watching *v1beta1.PodDisruptionBudget from k8s.io/client-go/informers/factory.go:134
I0226 13:22:06.585414       1 reflector.go:158] Listing and watching *v1.Service from k8s.io/client-go/informers/factory.go:134
I0226 13:22:06.585694       1 reflector.go:158] Listing and watching *v1.ReplicaSet from k8s.io/client-go/informers/factory.go:134
I0226 13:22:06.585949       1 reflector.go:158] Listing and watching *v1.StorageClass from k8s.io/client-go/informers/factory.go:134
I0226 13:22:06.594623       1 reflector.go:158] Listing and watching *v1.Node from k8s.io/client-go/informers/factory.go:134
I0226 13:22:06.594836       1 reflector.go:158] Listing and watching *v1.StatefulSet from k8s.io/client-go/informers/factory.go:134
I0226 13:22:06.594961       1 reflector.go:158] Listing and watching *v1beta1.CSINode from k8s.io/client-go/informers/factory.go:134
I0226 13:22:06.595078       1 reflector.go:158] Listing and watching *v1.ReplicationController from k8s.io/client-go/informers/factory.go:134
I0226 13:22:06.600079       1 reflector.go:158] Listing and watching *v1.PersistentVolumeClaim from k8s.io/client-go/informers/factory.go:134
I0226 13:22:06.600360       1 reflector.go:158] Listing and watching *v1.PersistentVolume from k8s.io/client-go/informers/factory.go:134
I0226 13:22:06.600510       1 reflector.go:158] Listing and watching *v1.Pod from k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:250
I0226 13:22:12.610045       1 node_tree.go:93] Added node "k8s-master" in group "" to NodeTree
I0226 13:22:12.610118       1 node_tree.go:93] Added node "node01" in group "" to NodeTree
I0226 13:22:12.610161       1 node_tree.go:93] Added node "node02" in group "" to NodeTree
I0226 13:22:12.649440       1 leaderelection.go:241] attempting to acquire leader lease  kube-system/sample-scheduler...
I0226 13:22:30.229500       1 leaderelection.go:251] successfully acquired lease kube-system/sample-scheduler
I0226 13:37:59.770417       1 scheduler.go:530] Attempting to schedule pod: default/test-scheduler-6d779d9465-vsvxh
I0226 13:37:59.771224       1 plugins.go:30] prefilter pod: test-scheduler-6d779d9465-vsvxh
I0226 13:37:59.774870       1 plugins.go:35] filter pod: test-scheduler-6d779d9465-vsvxh, node: k8s-master
I0226 13:37:59.775419       1 plugins.go:35] filter pod: test-scheduler-6d779d9465-vsvxh, node: node01
I0226 13:37:59.775436       1 plugins.go:35] filter pod: test-scheduler-6d779d9465-vsvxh, node: node02
I0226 13:37:59.782129       1 plugins.go:43] prebind node info: &Node{ObjectMeta:{node02   /api/v1/nodes/node02 ccf12cec-5234-4eb7-820f-a420ffbf1a2f 49340 0 2020-02-25 10:00:02 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:node02 kubernetes.io/os:linux] map[kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 projectcalico.org/IPv4Address:192.168.17.132/24 projectcalico.org/IPv4IPIPTunnelAddr:192.168.140.64 volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  []},Spec:NodeSpec{PodCIDR:10.244.2.0/24,DoNotUse_ExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{40465752064 0} {<nil>}  BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3953999872 0} {<nil>}  BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{36419176798 0} {<nil>} 36419176798 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3849142272 0} {<nil>}  BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2020-02-25 10:00:06 +0000 UTC,LastTransitionTime:2020-02-25 10:00:06 +0000 UTC,Reason:CalicoIsUp,Message:Calico is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2020-02-26 13:37:17 +0000 UTC,LastTransitionTime:2020-02-25 10:00:02 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2020-02-26 13:37:17 +0000 UTC,LastTransitionTime:2020-02-25 10:00:02 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2020-02-26 13:37:17 +0000 UTC,LastTransitionTime:2020-02-25 10:00:02 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2020-02-26 13:37:17 +0000 UTC,LastTransitionTime:2020-02-25 10:00:12 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:192.168.17.132,},NodeAddress{Type:Hostname,Address:node02,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:d492c6ab0e5c4a32a5a2407fd619f919,SystemUUID:A6D24D56-387E-EE32-06D3-50E47141E69C,BootID:e8c01186-8897-4812-93cc-cd27fde17fa3,KernelVersion:3.10.0-1062.4.3.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://19.3.5,KubeletVersion:v1.16.3,KubeProxyVersion:v1.16.3,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[perl@sha256:89b9d23c03a95d4f7995e4fbcc4811cf0286f93338aca9407ec1ff525e325b73 perl:latest],SizeBytes:857108231,},ContainerImage{Names:[mysql@sha256:6d0741319b6a2ae22c384a97f4bbee411b01e75f6284af0cce339fee83d7e314 mysql:latest],SizeBytes:465244873,},ContainerImage{Names:[kubeguide/tomcat-app@sha256:7a9193c2e5c6c74b4ad49a8abbf75373d4ab76c8f8db87672dc526b96ac69ac4 
kubeguide/tomcat-app:v1],SizeBytes:358241257,},ContainerImage{Names:[tomcat@sha256:8ecb10948deb32c34aeadf7bf95d12a93fbd3527911fa629c1a3e7823b89ce6f tomcat:8.0],SizeBytes:356245923,},ContainerImage{Names:[mysql@sha256:bef096aee20d73cbfd87b02856321040ab1127e94b707b41927804776dca02fc mysql:5.6],SizeBytes:302490673,},ContainerImage{Names:[allingeek/ch6_ipc@sha256:eeeb7255a341751d31fbcfbeef0b27a9f02e61819ac6ab4b95bcdbd223b5f250 allingeek/ch6_ipc:latest],SizeBytes:279090357,},ContainerImage{Names:[calico/node@sha256:887bcd551668cccae1fbfd6d2eb0f635ec37bb4cf599e1169989aa49dfac5b57 calico/node:v3.11.2],SizeBytes:255343962,},ContainerImage{Names:[calico/node@sha256:8ee677fa0969bf233deb9d9e12b5f2840a0e64b7d6acaaa8ac526672896b8e3c calico/node:v3.11.1],SizeBytes:225848439,},ContainerImage{Names:[calico/cni@sha256:f5808401a96ba93010b9693019496d88070dde80dda6976d10bc4328f1f18f4e calico/cni:v3.11.2],SizeBytes:204185753,},ContainerImage{Names:[calico/cni@sha256:e493af94c8385fdfbce859dd15e52d35e9bf34a0446fec26514bb1306e323c17 calico/cni:v3.11.1],SizeBytes:197763545,},ContainerImage{Names:[calico/node@sha256:441abbaeaa2a03d529687f8da49dab892d91ca59f30c000dfb5a0d6a7c2ede24 calico/node:v3.10.1],SizeBytes:192029475,},ContainerImage{Names:[nginx@sha256:f2d384a6ca8ada733df555be3edc427f2e5f285ebf468aae940843de8cf74645 nginx:1.11.9],SizeBytes:181819831,},ContainerImage{Names:[nginx@sha256:35779791c05d119df4fe476db8f47c0bee5943c83eba5656a15fc046db48178b nginx:1.10.1],SizeBytes:180708613,},ContainerImage{Names:[httpd@sha256:ac6594daaa934c4c6ba66c562e96f2fb12f871415a9b7117724c52687080d35d httpd:latest],SizeBytes:165254767,},ContainerImage{Names:[calico/cni@sha256:dad425d218fd33b23a3929b7e6a31629796942e9e34b13710d7be69cea35cb22 calico/cni:v3.10.1],SizeBytes:163333600,},ContainerImage{Names:[nginx@sha256:50cf965a6e08ec5784009d0fccb380fc479826b6e0e65684d9879170a9df8566 nginx:latest],SizeBytes:126323486,},ContainerImage{Names:[debian@sha256:79f0b1682af1a6a29ff63182c8103027f4de98b22d8fb50040e9c4bb13e3de78 debian:latest],SizeBytes:114052312,},ContainerImage{Names:[calico/pod2daemon-flexvol@sha256:93c64d6e3e0a0dc75d1b21974db05d28ef2162bd916b00ce62a39fd23594f810 calico/pod2daemon-flexvol:v3.11.2],SizeBytes:111122324,},ContainerImage{Names:[calico/pod2daemon-flexvol@sha256:4757a518c0cd54d3cad9481c943716ae86f31cdd57008afc7e8820b1821a74b9 calico/pod2daemon-flexvol:v3.11.1],SizeBytes:111122324,},ContainerImage{Names:[nginx@sha256:23b4dcdf0d34d4a129755fc6f52e1c6e23bb34ea011b315d87e193033bcd1b68 nginx:1.15],SizeBytes:109331233,},ContainerImage{Names:[nginx@sha256:f7988fb6c02e0ce69257d9bd9cf37ae20a60f1df7563c3a2a6abe24160306b8d nginx:1.14],SizeBytes:109129446,},ContainerImage{Names:[registry.cn-beijing.aliyuncs.com/mrvolleyball/nginx@sha256:d9b43ba0db2f6a02ce843d9c2d68558e514864dec66d34b9dd82ab9a44f16671 registry.cn-beijing.aliyuncs.com/mrvolleyball/nginx:v1],SizeBytes:109057403,},ContainerImage{Names:[registry.cn-beijing.aliyuncs.com/mrvolleyball/nginx@sha256:31da21eb24615dacdd8d219e1d43dcbaea6d78f6f8548ce50c3f56699312496b registry.cn-beijing.aliyuncs.com/mrvolleyball/nginx:v2],SizeBytes:109057403,},ContainerImage{Names:[nginx@sha256:e3456c851a152494c3e4ff5fcc26f240206abac0c9d794affb40e0714846c451 nginx:1.7.9],SizeBytes:91664166,},ContainerImage{Names:[registry.aliyuncs.com/google_containers/kube-proxy@sha256:1a1b21354e31190c0a1c6b0e16485ec095e7a4d423620e4381c3982ebfa24b3a 
registry.aliyuncs.com/google_containers/kube-proxy:v1.16.3],SizeBytes:86065116,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:3fa662e491a5e797c789afbd6d5694bdd186111beb7b5c9d66655448a7d3ae37 quay.io/coreos/flannel:v0.11.0],SizeBytes:52567296,},ContainerImage{Names:[calico/kube-controllers@sha256:46951fa7f713dfb0acc6be5edb82597df6f31ddc4e25c4bc9db889e894d02dd7 calico/kube-controllers:v3.11.1],SizeBytes:52477980,},ContainerImage{Names:[calico/kube-controllers@sha256:1169cca40b489271714cb1e97fed9b6b198aabdca1a1cc61698dd73ee6703d60 calico/kube-controllers:v3.11.2],SizeBytes:52477980,},ContainerImage{Names:[registry.aliyuncs.com/google_containers/coredns@sha256:4dd4d0e5bcc9bd0e8189f6fa4d4965ffa81207d8d99d29391f28cbd1a70a0163 registry.aliyuncs.com/google_containers/coredns:1.6.2],SizeBytes:44100963,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:4e51857931144bc6974ecd0e6f07b90da54d678e56d2166d5323a48afbc6eed7 quay.io/coreos/etcd:v3.1.5],SizeBytes:33649054,},ContainerImage{Names:[redis@sha256:e9083e10f5f81d350a3f687d582aefd06e114890b03e7f08a447fa1a1f66d967 redis:3.2-alpine],SizeBytes:22894256,},ContainerImage{Names:[nginx@sha256:db5acc22920799fe387a903437eb89387607e5b3f63cf0f4472ac182d7bad644 nginx:1.12-alpine],SizeBytes:15502679,},ContainerImage{Names:[calico/pod2daemon-flexvol@sha256:42ca53c5e4184ac859f744048e6c3d50b0404b9a73a9c61176428be5026844fe calico/pod2daemon-flexvol:v3.10.1],SizeBytes:9780495,},ContainerImage{Names:[busybox@sha256:1828edd60c5efd34b2bf5dd3282ec0cc04d47b2ff9caa0b6d4f07a21d1c08084 busybox:latest],SizeBytes:1219782,},ContainerImage{Names:[gcr.azk8s.cn/google_containers/pause@sha256:f78411e19d84a252e53bff71a4407a5686c46983a2c2eeed83929b888179acea registry.aliyuncs.com/google_containers/pause@sha256:759c3f0f6493093a9043cc813092290af69029699ade0e3dbe024e968fcb7cca gcr.azk8s.cn/google_containers/pause:3.1 registry.aliyuncs.com/google_containers/pause:3.1],SizeBytes:742472,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
I0226 13:37:59.784266       1 factory.go:610] Attempting to bind test-scheduler-6d779d9465-vsvxh to node02
I0226 13:37:59.788509       1 scheduler.go:667] pod default/test-scheduler-6d779d9465-vsvxh is bound successfully on node "node02", 3 nodes evaluated, 2 nodes were found feasible. Bound node resource: "Capacity: CPU<1>|Memory<3861328Ki>|Pods<110>|StorageEphemeral<39517336Ki>; Allocatable: CPU<1>|Memory<3758928Ki>|Pods<110>|StorageEphemeral<36419176798>.".

The logs show that the Pod was indeed scheduled by our custom scheduler.

kubectl get pod pod-name -o yaml

apiVersion: v1
kind: Pod
metadata:
  annotations:
    cni.projectcalico.org/podIP: 192.168.140.65/32
    cni.projectcalico.org/podIPs: 192.168.140.65/32
  creationTimestamp: "2020-02-26T13:37:59Z"
  generateName: test-scheduler-6d779d9465-
  labels:
    app: test-scheduler
    pod-template-hash: 6d779d9465
  name: test-scheduler-6d779d9465-vsvxh
  namespace: default
  ownerReferences:
  - apiVersion: apps/v1
    blockOwnerDeletion: true
    controller: true
    kind: ReplicaSet
    name: test-scheduler-6d779d9465
    uid: 0b57ec49-c02e-4f49-903e-d4063a9f4a45
  resourceVersion: "49454"
  selfLink: /api/v1/namespaces/default/pods/test-scheduler-6d779d9465-vsvxh
  uid: 7f511c47-cae3-4520-a1d8-91215da60527
spec:
  containers:
  - image: nginx
    imagePullPolicy: IfNotPresent
    name: nginx
    ports:
    - containerPort: 80
      protocol: TCP
    resources: {}
    terminationMessagePath: /dev/termination-log
    terminationMessagePolicy: File
    volumeMounts:
    - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
      name: default-token-hmmcv
      readOnly: true
  dnsPolicy: ClusterFirst
  enableServiceLinks: true
  nodeName: node02
  priority: 0
  restartPolicy: Always
  schedulerName: sample-scheduler
  securityContext: {}
  serviceAccount: default
  serviceAccountName: default
  terminationGracePeriodSeconds: 30
  tolerations:
  - effect: NoExecute
    key: node.kubernetes.io/not-ready
    operator: Exists
    tolerationSeconds: 300
  - effect: NoExecute
    key: node.kubernetes.io/unreachable
    operator: Exists
    tolerationSeconds: 300
  volumes:
  - name: default-token-hmmcv
    secret:
      defaultMode: 420
      secretName: default-token-hmmcv
status:
  conditions:
  - lastProbeTime: null
    lastTransitionTime: "2020-02-26T13:37:59Z"
    status: "True"
    type: Initialized
  - lastProbeTime: null
    lastTransitionTime: "2020-02-26T13:38:02Z"
    status: "True"
    type: Ready
  - lastProbeTime: null
    lastTransitionTime: "2020-02-26T13:38:02Z"
    status: "True"
    type: ContainersReady
  - lastProbeTime: null
    lastTransitionTime: "2020-02-26T13:37:59Z"
    status: "True"
    type: PodScheduled
  containerStatuses:
  - containerID: docker://86e5078b5ede4d6c08803e880061dcb72050b6cbf895615d8eca42f0d064f8a8
    image: nginx:latest
    imageID: docker-pullable://nginx@sha256:50cf965a6e08ec5784009d0fccb380fc479826b6e0e65684d9879170a9df8566
    lastState: {}
    name: nginx
    ready: true
    restartCount: 0
    started: true
    state:
      running:
        startedAt: "2020-02-26T13:38:02Z"
  hostIP: 192.168.17.132
  phase: Running
  podIP: 192.168.140.65
  podIPs:
  - ip: 192.168.140.65
  qosClass: BestEffort
  startTime: "2020-02-26T13:37:59Z"

As of Kubernetes v1.17, all of the scheduler's built-in predicate (filtering) and priority (scoring) functions have been turned into Scheduling Framework plugins.

Caveats

Pay attention to the replace directives in go.mod: k8s.io/kubernetes pins its staging modules (k8s.io/api, k8s.io/client-go, and so on) to v0.0.0, so a project that imports it as a library must replace them with concrete versions.
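
For illustration only (the module path and versions are placeholders, not the author's actual values), a go.mod for such a project typically looks like the truncated sketch below; the real file needs one replace entry for every k8s.io staging module that k8s.io/kubernetes depends on.

module github.com/example/sample-scheduler-framework

go 1.13

require (
    k8s.io/apimachinery v0.17.3
    k8s.io/client-go v0.17.3
    k8s.io/component-base v0.17.3
    k8s.io/klog v1.0.0
    k8s.io/kubernetes v1.17.3
)

replace (
    // k8s.io/kubernetes declares its staging repos as v0.0.0,
    // so they must be pinned to concrete versions here.
    k8s.io/api => k8s.io/api v0.17.3
    k8s.io/apimachinery => k8s.io/apimachinery v0.17.3
    k8s.io/client-go => k8s.io/client-go v0.17.3
    k8s.io/component-base => k8s.io/component-base v0.17.3
    // ... one replace entry per remaining k8s.io staging module ...
)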

References

自定义 Kubernetes 调度器 (Custom Kubernetes scheduler), 阳明的博客

GitHub - cnych/sample-scheduler-framework: This repo is a sample for Kubernetes scheduler framework.

