Problem description

There are 3 hosts in the cluster: 163, 192 and 193. Assume node 193 has a problem, so service containers must not run on node 193.

```
[root@sit-cdpapp-163l ~]# docker node ls
ID                          HOSTNAME          STATUS   AVAILABILITY   MANAGER STATUS   ENGINE VERSION
ohjdhdzdiebk6ibjoriygfszu   metabase193       Ready    Active         Reachable        19.03.5
tkoaoslrj2y526qugt3kwzfjo   postgresql192     Ready    Active         Reachable        19.03.5
t8ixy0wk2mf1owvcsgu6bvt6b * sit-cdpapp-163l   Ready    Active         Leader           19.03.5
```

Solution

Label all 3 nodes so they can be told apart; 163 and 192 get the same label.

```
[root@sit-cdpapp-163l ~]# docker node update --label-add tag=163 sit-cdpapp-163l
sit-cdpapp-163l
[root@postgresql192 certs.d]# docker node update --label-add tag=163 postgresql192
postgresql192
[root@metabase193 ~]# docker node update --label-add tag=193 metabase193
metabase193
```

Note that in "tag=163" the key "tag" is arbitrary; any label key can be used.

Create the service constrained to the 163 label:

```
[root@sit-cdpapp-163l ~]# docker service create --name worker --hostname worker --constraint 'node.labels.tag==163' --network sk-net registry.xxxx.com/sk_pbms_worker_dev:alpha3.4.X
85mgnxhx5m3nwod5any3za1dj
overall progress: 1 out of 1 tasks
1/1: running   [==================================================>]
verify: Service converged
```

Scale the service to 3 replicas and check where the tasks land:

```
[root@postgresql192 certs.d]# docker service scale worker=3
[root@sit-cdpapp-163l ~]# docker service ps worker
ID             NAME           IMAGE                                             NODE              DESIRED STATE   CURRENT STATE            ERROR                                 PORTS
8xxlea1infyw   worker.1       registry.xxxx.com/sk_pbms_worker_dev:alpha3.4.X   postgresql192     Running         Running 5 minutes ago
92upe13xos13    \_ worker.1   registry.xxxx.com/sk_pbms_worker_dev:alpha3.4.X   postgresql192     Shutdown        Failed 5 minutes ago     "No such container: worker.1.9..."
ue5vpn47vl9x    \_ worker.1   registry.xxxx.com/sk_pbms_worker_dev:alpha3.4.X   sit-cdpapp-163l   Shutdown        Shutdown 3 minutes ago
nu2w0uzulac3   worker.2       registry.xxxx.com/sk_pbms_worker_dev:alpha3.4.X   postgresql192     Running         Running 4 minutes ago
hx5fjj9nilbb   worker.3       registry.xxxx.com/sk_pbms_worker_dev:alpha3.4.X   sit-cdpapp-163l   Running         Running 21 seconds ago
```

All tasks stay on the 163-labelled nodes (sit-cdpapp-163l and postgresql192); nothing is scheduled onto metabase193.

Test 2: moving the service containers to another node

```
[root@sit-cdpapp-163l ~]# docker service update --constraint-add 'node.labels.tag==193' worker
worker
overall progress: 0 out of 3 tasks
1/3: no suitable node (scheduling constraints not satisfied on 3 nodes)
2/3:
3/3:
^C
Operation continuing in background.
Use `docker service ps worker` to check progress.
```

If a constraint was specified when the service was created and you now want to move it somewhere else, the update has to run both --constraint-rm and --constraint-add. Adding the new constraint on its own just stacks it on top of the old one, so no node satisfies both and the tasks become unschedulable, as above. (A single-command way to swap the constraint is sketched at the end of this note.) Fortunately the operation keeps running in the background, and a start-first (start the new task before stopping the old one) update strategy is also available.

```
[root@sit-cdpapp-163l ~]# docker service update --constraint-rm 'node.labels.tag==163' worker
worker
overall progress: 3 out of 3 tasks
1/3: running   [==================================================>]
2/3: running   [==================================================>]
3/3: running   [==================================================>]
verify: Service converged
[root@sit-cdpapp-163l ~]# docker service update --constraint-add 'node.labels.tag==193' worker
worker
overall progress: 3 out of 3 tasks
1/3: running   [==================================================>]
2/3: running   [==================================================>]
3/3: running   [==================================================>]
verify: Service converged
```

The result is as follows:

```
[root@postgresql192 certs.d]# docker service ps worker
ID             NAME           IMAGE                                             NODE              DESIRED STATE   CURRENT STATE             ERROR                                 PORTS
pqe7ffne9sx1   worker.1       registry.xxxx.com/sk_pbms_worker_dev:alpha3.4.X   metabase193       Running         Running 28 minutes ago
j5zoqbqi837s    \_ worker.1   registry.xxxx.com/sk_pbms_worker_dev:alpha3.4.X                     Shutdown        Pending 34 minutes ago    "no suitable node (scheduling ..."
zmleg7w6wiqs    \_ worker.1   registry.xxxx.com/sk_pbms_worker_dev:alpha3.4.X   sit-cdpapp-163l   Shutdown        Shutdown 34 minutes ago
qxga9yumzlte   worker.2       registry.xxxx.com/sk_pbms_worker_dev:alpha3.4.X   metabase193       Running         Running 27 minutes ago
vevxy0pahkpp    \_ worker.2   registry.xxxx.com/sk_pbms_worker_dev:alpha3.4.X   postgresql192     Shutdown        Shutdown 27 minutes ago
nty6x648eixr   worker.3       registry.xxxx.com/sk_pbms_worker_dev:alpha3.4.X   metabase193       Running         Running 27 minutes ago
z18epvm2o2wf    \_ worker.3   registry.xxxx.com/sk_pbms_worker_dev:alpha3.4.X                     Shutdown        Shutdown 27 minutes ago
```

All running tasks are now on metabase193.

Additional notes

One case not covered above is using --constraint at creation time with a label that no node carries yet. In that situation the service itself is created and listed, but its tasks sit in the Pending state with no node assigned. As soon as that label is added to some node, the pending containers are deployed onto it immediately. A quick way to check which labels each node currently carries is sketched below.
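As a supplement to the note above, it can help to confirm which labels each node actually carries before creating a constrained service. A minimal sketch using docker node inspect, reusing the hostnames from this cluster; the --format template simply prints the node's label map:

```
# Print the labels attached to each node. An empty map ({}) means the node
# carries no labels yet, so a constraint such as node.labels.tag==193 would
# leave the service's tasks stuck in Pending.
for node in sit-cdpapp-163l postgresql192 metabase193; do
  echo "$node:"
  docker node inspect "$node" --format '{{ json .Spec.Labels }}'
done
```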
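And as a follow-up to Test 2, the removal and the addition can be combined into a single docker service update call, so there is never a point where the old and new constraints conflict. A minimal sketch, reusing the label values from above:

```
# Swap the placement constraint in one update: drop the old 163 match and add
# the 193 match together, so the scheduler reschedules the tasks directly onto
# the 193-labelled node instead of first reporting "no suitable node".
docker service update \
  --constraint-rm 'node.labels.tag==163' \
  --constraint-add 'node.labels.tag==193' \
  worker
```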
