## Preface

Ingress can be understood as a "Service of Services": a layer built in front of existing Services that acts as the unified entry point for external traffic and routes incoming requests. In plain terms, you put an nginx or haproxy at the front to forward different hosts or URLs to the corresponding backend Service, which in turn passes traffic on to the Pods. Ingress simply decouples and abstracts that nginx/haproxy setup.

## Update history

- 2020-06-28 - first draft - Zuo Chengli (左成礼)
- Original article: https://blog.zuolinux.com/2020/06/28/about-ingress-controller.html

## Ingress

Ingress makes up for some shortcomings of plain Services when exposing applications to external traffic: a Service by itself cannot apply unified layer-7 (host/URL) routing rules at the entry point, and by default one Service corresponds to one backend application.

Generally speaking, ingress consists of two parts: the ingress-controller and the Ingress object.

- The ingress-controller corresponds to the nginx/haproxy program, running as a Pod.
- The Ingress object corresponds to the nginx/haproxy configuration file.

The ingress-controller uses the information described in Ingress objects to modify the nginx/haproxy rules inside its own Pod.

## Deploy ingress

First, prepare the test resources. Deploy two applications: requests to service 1 return Version 1, requests to service 2 return Version 2.

Configuration for the two applications:

```yaml
# cat deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-v1.0
spec:
  selector:
    matchLabels:
      app: v1.0
  replicas: 3
  template:
    metadata:
      labels:
        app: v1.0
    spec:
      containers:
      - name: hello-v1
        image: anjia0532/google-samples.hello-app:1.0
        ports:
        - containerPort: 8080
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-v2.0
spec:
  selector:
    matchLabels:
      app: v2.0
  replicas: 3
  template:
    metadata:
      labels:
        app: v2.0
    spec:
      containers:
      - name: hello-v2
        image: anjia0532/google-samples.hello-app:2.0
        ports:
        - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: service-v1
spec:
  selector:
    app: v1.0
  ports:
  - port: 8081
    targetPort: 8080
    protocol: TCP
---
apiVersion: v1
kind: Service
metadata:
  name: service-v2
spec:
  selector:
    app: v2.0
  ports:
  - port: 8081
    targetPort: 8080
    protocol: TCP
```

The containers listen on 8080, the Services on 8081.

Start the two Services and their Pods:

```shell
# kubectl apply -f deployment.yaml
deployment.apps/hello-v1.0 created
deployment.apps/hello-v2.0 created
service/service-v1 created
service/service-v2 created
```

Check the startup status; each Service has 3 Pods behind it:

```
# kubectl get pod,service -o wide
NAME                              READY   STATUS    RESTARTS   AGE   IP               NODE     NOMINATED NODE   READINESS GATES
pod/hello-v1.0-6594bd8499-lt6nn   1/1     Running   0          37s   192.10.205.234   work01   <none>           <none>
pod/hello-v1.0-6594bd8499-q58cw   1/1     Running   0          37s   192.10.137.190   work03   <none>           <none>
pod/hello-v1.0-6594bd8499-zcmf4   1/1     Running   0          37s   192.10.137.189   work03   <none>           <none>
pod/hello-v2.0-6bd99fb9cd-9wr65   1/1     Running   0          37s   192.10.75.89     work02   <none>           <none>
pod/hello-v2.0-6bd99fb9cd-pnhr8   1/1     Running   0          37s   192.10.75.91     work02   <none>           <none>
pod/hello-v2.0-6bd99fb9cd-sx949   1/1     Running   0          37s   192.10.205.236   work01   <none>           <none>

NAME                 TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE   SELECTOR
service/service-v1   ClusterIP   192.20.92.221   <none>        8081/TCP   37s   app=v1.0
service/service-v2   ClusterIP   192.20.255.0    <none>        8081/TCP   36s   app=v2.0
```
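A side note on the port mapping above (Service port 8081 forwarding to containerPort 8080): `targetPort` can also refer to a named container port instead of a number, which keeps the Service valid if the container port number changes later. A sketch of that variation; the port name `http` is my own and is not in the original manifests:

```yaml
# Variation on the manifests above; only the changed fragments are shown.
# In the Deployment's container spec, give the port a name:
    ports:
    - name: http
      containerPort: 8080
---
# In the Service, reference the port by that name:
  ports:
  - port: 8081
    targetPort: http   # resolves to the containerPort named "http" (8080)
    protocol: TCP
```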
Check which Pods are mounted behind each Service:

```
[root@master01 ~]# kubectl get ep service-v1
NAME         ENDPOINTS                                                     AGE
service-v1   192.10.137.189:8080,192.10.137.190:8080,192.10.205.234:8080   113s
[root@master01 ~]# kubectl get ep service-v2
NAME         ENDPOINTS                                                     AGE
service-v2   192.10.205.236:8080,192.10.75.89:8080,192.10.75.91:8080       113s
```

Both Services have successfully mounted their Pods. Next, deploy the ingress-controller in front of them.

First, designate the two nodes work01/work02 to run the ingress-controller:

```shell
kubectl label nodes work01 ingress-ready=true
kubectl label nodes work02 ingress-ready=true
```

The ingress-controller uses the official NGINX build:

```shell
wget -O ingress-controller.yaml https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/static/provider/kind/deploy.yaml
```

Modify it to start two ingress-controllers:

```yaml
# vim ingress-controller.yaml
apiVersion: apps/v1
kind: Deployment
...
  revisionHistoryLimit: 10
  replicas: 2     # add this line
```

And change the image to a domestic mirror:

```yaml
# vim ingress-controller.yaml
    spec:
      dnsPolicy: ClusterFirst
      containers:
        - name: controller
          #image: us.gcr.io/k8s-artifacts-prod/ingress-nginx/controller:v0.34.1@sha256:0e072dddd1f7f8fc8909a2ca6f65e76c5f0d2fcfb8be47935ae3457e8bbceb20
          image: registry.cn-hangzhou.aliyuncs.com/google_containers/nginx-ingress-controller:0.32.0
          imagePullPolicy: IfNotPresent
```

Deploy the ingress-controller:

```shell
kubectl apply -f ingress-controller.yaml
```

Check the running status:

```
# kubectl get pod,service -n ingress-nginx -o wide
NAME                                            READY   STATUS      RESTARTS   AGE   IP               NODE     NOMINATED NODE   READINESS GATES
pod/ingress-nginx-admission-create-ld4nt        0/1     Completed   0          15m   192.10.137.188   work03   <none>           <none>
pod/ingress-nginx-admission-patch-p5jmd         0/1     Completed   1          15m   192.10.75.85     work02   <none>           <none>
pod/ingress-nginx-controller-75f89c4965-vxt4d   1/1     Running     0          15m   192.10.205.233   work01   <none>           <none>
pod/ingress-nginx-controller-75f89c4965-zmjg2   1/1     Running     0          15m   192.10.75.87     work02   <none>           <none>

NAME                                         TYPE        CLUSTER-IP      EXTERNAL-IP                   PORT(S)                      AGE   SELECTOR
service/ingress-nginx-controller             NodePort    192.20.105.10   192.168.10.17,192.168.10.17   80:30698/TCP,443:31303/TCP   15m   app.kubernetes.io/component=controller,app.kubernetes.io/instance=ingress-nginx,app.kubernetes.io/name=ingress-nginx
service/ingress-nginx-controller-admission   ClusterIP   192.20.80.208   <none>                        443/TCP                      15m   app.kubernetes.io/component=controller,app.kubernetes.io/instance=ingress-nginx,app.kubernetes.io/name=ingress-nginx
```
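The controller Pods landed on work01/work02 because the `ingress-ready=true` labels applied earlier match a nodeSelector in the kind-provider manifest downloaded above. A sketch of the relevant fragment of the controller Deployment (paraphrased; verify the exact fields against your downloaded deploy.yaml):

```yaml
# Fragment of the ingress-nginx-controller Deployment's Pod template.
spec:
  template:
    spec:
      nodeSelector:
        ingress-ready: "true"     # matches the label added to work01/work02
        kubernetes.io/os: linux
```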
You can see the ingress-nginx-controller Pods running on work01/02.

Write the request-forwarding rules:

```yaml
# cat ingress.yaml
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: nginx-ingress
spec:
  rules:
  - host: test-v1.com
    http:
      paths:
      - path: /
        backend:
          serviceName: service-v1
          servicePort: 8081
  - host: test-v2.com
    http:
      paths:
      - path: /
        backend:
          serviceName: service-v2
          servicePort: 8081
```

Enable the rules:

```shell
# kubectl apply -f ingress.yaml
ingress.networking.k8s.io/nginx-ingress created
```

You can see that the nginx configuration inside the ingress-controller Pod has taken effect:

```
# grep -A 30 test-v1.com
server {
    server_name test-v1.com ;
    listen 80 ;
    listen 443 ssl http2 ;
    set $proxy_upstream_name "-";
    ssl_certificate_by_lua_block {
        certificate.call()
    }
    location / {
        set $namespace      "default";
        set $ingress_name   "nginx-ingress";
        set $service_name   "service-v1";
        set $service_port   "8081";
        set $location_path  "/";
```

Now access it from outside the cluster. First resolve the domains to work01:

```
# cat /etc/hosts
192.168.10.15 test-v1.com
192.168.10.15 test-v2.com
```

Run the tests:

```
# curl test-v1.com
Hello, world!
Version: 1.0.0
Hostname: hello-v1.0-6594bd8499-svjnf
# curl test-v1.com
Hello, world!
Version: 1.0.0
Hostname: hello-v1.0-6594bd8499-zqjtm
# curl test-v1.com
Hello, world!
Version: 1.0.0
Hostname: hello-v1.0-6594bd8499-www76
# curl test-v2.com
Hello, world!
Version: 2.0.0
Hostname: hello-v2.0-6bd99fb9cd-h8862
# curl test-v2.com
Hello, world!
Version: 2.0.0
Hostname: hello-v2.0-6bd99fb9cd-sn84j
```

Requests for different domains reach different Pods behind the correct Service.

Then point the domains at work02:

```
# cat /etc/hosts
192.168.10.16 test-v1.com
192.168.10.16 test-v2.com
# curl test-v1.com
Hello, world!
Version: 1.0.0
Hostname: hello-v1.0-6594bd8499-www76
# curl test-v1.com
Hello, world!
Version: 1.0.0
Hostname: hello-v1.0-6594bd8499-zqjtm
# curl test-v2.com
Hello, world!
Version: 2.0.0
Hostname: hello-v2.0-6bd99fb9cd-sn84j
# curl test-v2.com
Hello, world!
Version: 2.0.0
Hostname: hello-v2.0-6bd99fb9cd-h8862
```

That works too.

How do we make this highly available? You can put a pair of LVS + keepalived servers in front of work01/work02 for highly available access. Or you can run keepalived directly on work01/work02 and float a VIP between them, which needs no extra machines and saves cost.

## Conclusion

This article deployed Ingress with a Deployment plus a NodePort Service: the Deployment manages the ingress-controller Pods, and the NodePort Service exposes the ingress.

View the ingress Service:

```
# kubectl get service -o wide -n ingress-nginx
NAME                       TYPE       CLUSTER-IP      EXTERNAL-IP                   PORT(S)                      AGE   SELECTOR
ingress-nginx-controller   NodePort   192.20.105.10   192.168.10.17,192.168.10.17   80:30698/TCP,443:31303/TCP   22m   app.kubernetes.io/component=controller,app.kubernetes.io/instance=ingress-nginx,app.kubernetes.io/name=ingress-nginx
```
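The `80:30698` mapping in the listing above is the randomly allocated NodePort. If you need a stable port, the Service can pin it explicitly with `nodePort`. A fragment sketch; the value 30080 is my own choice within the default 30000-32767 NodePort range:

```yaml
# Fragment of the ingress-nginx-controller Service spec; value is illustrative.
spec:
  type: NodePort
  ports:
  - name: http
    port: 80
    targetPort: 80
    nodePort: 30080   # pinned; must fall within the cluster's NodePort range
```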
You can see that port 30698 is exposed externally; accessing any node on port 30698 reaches the v1/v2 Pods. But this port is random and changes if the Service is rebuilt, so we instead access port 80 directly on work01/02, where the ingress-controller Pods run. In front of work01/02 sits the LVS + keepalived pair for highly available load balancing.

Running `iptables -t nat -L -n -v` on work01/02 shows that port 80 is exposed through NAT, which can become a bottleneck under heavy traffic.

You can instead deploy the ingress controller with DaemonSet + HostNetwork. Port 80 exposed on work01/02 then uses the host's network directly, with no NAT mapping, which avoids the performance problem.

## Contact me

WeChat official account: zuolinux_com
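Appendix: a minimal sketch of the DaemonSet + HostNetwork deployment suggested in the conclusion. This is not the full manifest; the image and the `ingress-ready` node label are taken from earlier in the post, and the rest (service account, args, probes) is omitted for brevity:

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: ingress-nginx-controller
  namespace: ingress-nginx
spec:
  selector:
    matchLabels:
      app.kubernetes.io/name: ingress-nginx
  template:
    metadata:
      labels:
        app.kubernetes.io/name: ingress-nginx
    spec:
      hostNetwork: true                      # bind 80/443 on the node itself, no NAT
      dnsPolicy: ClusterFirstWithHostNet     # keep cluster DNS while on host network
      nodeSelector:
        ingress-ready: "true"                # run only on work01/work02
      containers:
      - name: controller
        image: registry.cn-hangzhou.aliyuncs.com/google_containers/nginx-ingress-controller:0.32.0
        ports:
        - containerPort: 80
        - containerPort: 443
```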