
Quickly Building a Highly Available RabbitMQ Cluster with HAProxy Soft Load Balancing


The highly available RabbitMQ architecture is a built-in cluster of two RabbitMQ disc nodes and one RabbitMQ RAM node. Two disc nodes are used so that queues and exchanges can still be rebuilt if one disc node goes down; with a single disc node, losing it would make that impossible. HAProxy acts as the load balancer in front of the RabbitMQ cluster. To keep HAProxy itself from becoming a single point of failure, Keepalived runs two HAProxy nodes as a master/backup pair. When an application accesses the HAProxy service through the VIP (virtual IP), it reaches the master HAProxy by default; if the master's HAProxy fails, the VIP floats to the backup node and requests are served by its HAProxy instead.

Preparation: install docker and docker-compose on the servers, and prepare the offline images rabbitmq.tar and haproxy.tar. The server nodes must be able to ping each other.

The cluster uses RabbitMQ's built-in clustering. When the node hosting a queue crashes, a durable queue cannot automatically be recreated on another node, while a non-durable queue can be recreated on any available node; our project uses non-durable queues. Keep at least two disc nodes, because if the only disc node crashes, metadata such as queues and exchanges can no longer be created in the cluster.

Service distribution:

- 192.168.1.213: RabbitMQ disc node 1
- 192.168.1.203: RabbitMQ disc node 2
- 192.168.1.212: RabbitMQ RAM node 3

Creating the first RabbitMQ node

Log in to the server and create the directory /app/mcst/rabbitmq. Upload the image tarball rabbitmq.tar and the compose file mcst-rabbitmq-node1.yaml to the new directory via sftp.

Import the image:

```
$ docker load -i rabbitmq.tar
$ docker images   # check that the import succeeded
```

The compose file mcst-rabbitmq-node1.yaml:

```
version: '3'
services:
  rabbitmq:
    container_name: mcst-rabbitmq
    image: rabbitmq:3-management
    restart: always
    ports:
      - 4369:4369
      - 5671:5671
      - 5672:5672
      - 15672:15672
      - 25672:25672
    environment:
      - TZ=Asia/Shanghai
      - RABBITMQ_ERLANG_COOKIE=iweru238roseire
      - RABBITMQ_DEFAULT_USER=mcst_admin
      - RABBITMQ_DEFAULT_PASS=mcst_admin_123
      - RABBITMQ_DEFAULT_VHOST=mcst_vhost
    hostname: rabbitmq1
    extra_hosts:
      - rabbitmq1:192.168.1.213
      - rabbitmq2:192.168.1.203
      - rabbitmq3:192.168.1.212
    volumes:
      - ./data:/var/lib/rabbitmq
```

Deployment command:

```
$ docker-compose -f mcst-rabbitmq-node1.yaml up -d
```

Note: RABBITMQ_ERLANG_COOKIE must be identical on all three nodes, and the extra_hosts entries are required; without them, the nodes cannot reach each other while the cluster is being formed. This node acts as the cluster's root node.

The second RabbitMQ node is deployed the same way; in addition, upload the rabbitmq.sh script to the ./rabbitmq.sh path configured under volumes. The compose file mcst-rabbitmq-node2.yaml:

```
version: '3'
services:
  rabbitmq:
    container_name: mcst-rabbitmq
    image: rabbitmq:3-management
    restart: always
    ports:
      - 4369:4369
      - 5671:5671
      - 5672:5672
      - 15672:15672
      - 25672:25672
    environment:
      - TZ=Asia/Shanghai
      - RABBITMQ_ERLANG_COOKIE=iweru238roseire
      - RABBITMQ_DEFAULT_USER=mcst_admin
      - RABBITMQ_DEFAULT_PASS=mcst_admin_123
      - RABBITMQ_DEFAULT_VHOST=mcst_vhost
    hostname: rabbitmq2
    extra_hosts:
      - rabbitmq1:192.168.1.213
      - rabbitmq2:192.168.1.203
      - rabbitmq3:192.168.1.212
    volumes:
      - ./rabbitmq.sh:/home/rabbitmq.sh
      - ./data:/var/lib/rabbitmq
```

Deployment command:

```
$ docker-compose -f mcst-rabbitmq-node2.yaml up -d
```

Once the node is up, enter the rabbitmq2 container and run the /home/rabbitmq.sh script. If it fails with a permission error, run chmod +x /home/rabbitmq.sh inside the container first, then execute bash /home/rabbitmq.sh to join the node to the cluster.

Command to enter the container:

```
$ docker exec -it mcst-rabbitmq /bin/bash
```

The script contents (disc node):

```
rabbitmqctl stop_app
rabbitmqctl reset
rabbitmqctl join_cluster rabbit@rabbitmq1
rabbitmqctl start_app
```

The third RabbitMQ node is deployed the same way; see mcst-rabbitmq-node3.yaml:

```
version: '3'
services:
  rabbitmq:
    container_name: mcst-rabbitmq
    image: rabbitmq:3-management
    restart: always
    ports:
      - 4369:4369
      - 5671:5671
      - 5672:5672
      - 15672:15672
      - 25672:25672
    environment:
      - TZ=Asia/Shanghai
      - RABBITMQ_ERLANG_COOKIE=iweru238roseire
      - RABBITMQ_DEFAULT_USER=mcst_admin
      - RABBITMQ_DEFAULT_PASS=mcst_admin_123
      - RABBITMQ_DEFAULT_VHOST=mcst_vhost
    hostname: rabbitmq3
    extra_hosts:
      - rabbitmq1:192.168.1.213
      - rabbitmq2:192.168.1.203
      - rabbitmq3:192.168.1.212
    volumes:
      - ./rabbitmq-ram.sh:/home/rabbitmq-ram.sh
      - ./data:/var/lib/rabbitmq
```

Deployment command:

```
$ docker-compose -f mcst-rabbitmq-node3.yaml up -d
```

After starting the rabbitmq3 node, enter the container and execute bash /home/rabbitmq-ram.sh to add the RAM node to the cluster. Script contents:

```
rabbitmqctl stop_app
rabbitmqctl reset
rabbitmqctl join_cluster --ram rabbit@rabbitmq1
rabbitmqctl start_app
```

Inside a container, check the cluster state with rabbitmqctl cluster_status:

```
Cluster status of node rabbit@rabbitmq1 ...
[{nodes,[{disc,[rabbit@rabbitmq1,rabbit@rabbitmq2]},{ram,[rabbit@rabbitmq3]}]},
 {running_nodes,[rabbit@rabbitmq2,rabbit@rabbitmq3,rabbit@rabbitmq1]},
 {cluster_name,<<"rabbit@rabbitmq2">>},
 {partitions,[]},
 {alarms,[{rabbit@rabbitmq2,[]},{rabbit@rabbitmq3,[]},{rabbit@rabbitmq1,[]}]}]
```

You can also open the management UI at http://192.168.1.213:15672 to view the cluster state.
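As an optional extra check (not part of the original walkthrough), the management plugin's HTTP API can report node membership from outside the containers. A minimal sketch, assuming the port and the mcst_admin credentials from the compose files above:

```
# List cluster nodes via the management HTTP API.
$ curl -s -u mcst_admin:mcst_admin_123 http://192.168.1.213:15672/api/nodes \
    | grep -o '"name":"rabbit@[^"]*"'   # crude extraction; use jq for real parsing
```

The output should contain one entry for each of rabbit@rabbitmq1, rabbit@rabbitmq2, and rabbit@rabbitmq3.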
Setting up HAProxy load balancing

Create the directory /app/mcst/haproxy and upload the image tarball, the haproxy configuration file, and the docker compose file to it. The image is imported the same way as above. The compose file contents:

```
version: '3'
services:
  haproxy:
    container_name: mcst-haproxy
    image: haproxy:2.1
    restart: always
    ports:
      - 8100:8100
      - 15670:5670
    environment:
      - TZ=Asia/Shanghai
    extra_hosts:
      - rabbitmq1:192.168.1.213
      - rabbitmq2:192.168.1.203
      - rabbitmq3:192.168.1.212
    volumes:
      - ./haproxy.cfg:/usr/local/etc/haproxy/haproxy.cfg:ro
```

The key points are extra_hosts (the IPs of the RabbitMQ cluster nodes) and the volume (mounting our own configuration file). The haproxy configuration file contents:

```
global
    log 127.0.0.1 local0 info
    maxconn 4096

defaults
    log     global
    mode    tcp
    option  tcplog
    retries 3
    option  redispatch
    maxconn 2000
    timeout connect 5s
    timeout client  120s
    timeout server  120s

# ssl for rabbitmq
# frontend ssl_rabbitmq
#     bind *:5673 ssl crt /root/rmqha_proxy/rmqha.pem
#     mode tcp
#     default_backend rabbitmq

# web management UI
listen stats
    bind *:8100
    mode http
    stats enable
    stats realm Haproxy\ Statistics
    stats uri /stats
    stats auth admin:admin123

# load balancing
listen rabbitmq
    bind *:5670
    mode tcp
    balance roundrobin
    server rabbitmq1 rabbitmq1:5672 check inter 5s rise 2 fall 3
    server rabbitmq2 rabbitmq2:5672 check inter 5s rise 2 fall 3
    server rabbitmq3 rabbitmq3:5672 check inter 5s rise 2 fall 3
```

Deployment command:

```
$ docker-compose -f mcst-haproxy.yaml up -d
```

Service distribution:

- 192.168.1.212: HAProxy master
- 192.168.1.203: HAProxy backup

Start the HAProxy service on each of the two nodes, then log in to HAProxy's stats page to check the backend state: http://192.168.1.212:8100/stats.

Using Keepalived for HAProxy master/backup

Reserve an IP address on the same LAN as the service nodes. It must be unoccupied; it will serve as the VIP (virtual IP).

Installing Keepalived

Download the latest release from the Keepalived official site. This installation uses version 2.0.20, so the downloaded file is keepalived-2.0.20.tar.gz. Upload it to the server and extract the tarball:

```
$ tar -xf keepalived-2.0.20.tar.gz
```

Check the dependencies:

```
$ cd keepalived-2.0.20
$ ./configure
```

Building Keepalived requires gcc and openssl-devel:

```
$ yum install -y gcc
$ yum install -y openssl-devel
```

Because the intranet servers cannot reach an external yum repository, a local yum source is used instead. Upload the Linux installation DVD image to /mnt/iso and mount it at /mnt/cdrom as the yum installation source:

```
$ mkdir /mnt/iso
$ mkdir /mnt/cdrom
$ mv /ftp/rhel-server-7.3-x86_64-dvd.iso /mnt/iso
```

Mount the disc image and register the repo:

```
$ mount -ro loop /mnt/iso/rhel-server-7.3-x86_64-dvd.iso /mnt/cdrom
$ mv /ftp/myself.repo /etc/yum.repos.d
$ yum clean all
$ yum makecache
$ yum update
```

Attachment: contents of the myself.repo file:

```
[base]
name=Red Hat Enterprise Linux $releasever - $basearch - Source
baseurl=file:///mnt/cdrom
enabled=1
gpgcheck=1
gpgkey=file:///mnt/cdrom/RPM-GPG-KEY-redhat-release
```

After this change, whenever a package needs to be installed from the installation disc, only the mount command is needed to load the ISO again:

```
$ mount -ro loop /mnt/iso/rhel-server-7.3-x86_64-dvd.iso /mnt/cdrom
```

Now gcc and openssl-devel can be installed with yum. If a local yum source is not an option, yum's downloadonly plugin can be used instead: download the required packages on a machine that has internet access and the same OS version, then install them locally on the target intranet machine. Even so, the local yum source is the recommended way to install gcc and openssl-devel.

Running ./configure again now produces a warning: "this build will not support IPVS with IPv6. Please install libnl/libnl-3 dev libraries to support IPv6 with IPVS." Installing the following dependencies resolves it:

```
$ yum install -y libnl libnl-devel
```

Then run ./configure once more, execute make to compile, and finally make install to install. When installation finishes, run keepalived --version; if it prints the version number, the installation succeeded.

Creating the Keepalived configuration file

Create the configuration file /etc/keepalived/keepalived.conf. Master node configuration:

```
vrrp_script chk_haproxy {
    script "killall -0 haproxy"   # verify haproxy's pid existence
    interval 5                    # check every 5 seconds
    weight -2                     # if the check fails, priority drops by 2
}

vrrp_instance VI_1 {
    # this host: MASTER; standby machine: BACKUP
    # bind the instance to a NIC; use the `ip a` command to find the NIC name
    interface ens192
    # virtual router ID, a number from 1-255; master and backup must use the same ID within a VRRP instance
    virtual_router_id 51
    # priority: higher numbers win; the master's priority must be higher than the backup's
    priority 101
    # virtual IP addresses; there can be more than one
    virtual_ipaddress {
        192.168.1.110
    }
    track_script {
        # the script whose state we monitor
        chk_haproxy
    }
}
```

ens192 is the NIC name; use ifconfig to list the server's NICs and find the one that carries the local service IP. The virtual_router_id value must match the configuration on the backup node. The command killall -0 haproxy does nothing if the haproxy process exists; if the service is not running, it reports "haproxy: no process found".
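The article shows only the master's configuration. For reference, a backup node's keepalived.conf would mirror it with a lower priority; the sketch below is an assumption derived from the notes above (same virtual_router_id, NIC, and VIP), not a file from the original article:

```
# Hypothetical backup-node /etc/keepalived/keepalived.conf,
# mirroring the master configuration above.
vrrp_script chk_haproxy {
    script "killall -0 haproxy"
    interval 5
    weight -2
}

vrrp_instance VI_1 {
    interface ens192         # assumed same NIC name; confirm with `ip a`
    virtual_router_id 51     # must match the master
    priority 99              # lower than the master's 101
    virtual_ipaddress {
        192.168.1.110        # same VIP as the master
    }
    track_script {
        chk_haproxy
    }
}
```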
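The original text does not show how Keepalived is launched. One common option after a source install (an assumption; adjust to your init system) is to start it on both nodes pointing at the configuration file:

```
# -f selects an explicit configuration file
$ keepalived -f /etc/keepalived/keepalived.conf
```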
With Keepalived running, the NIC info on the master looks like this (ip addr output, trimmed; the VIP appears as a secondary /32 address on ens192):

```
ens192: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP qlen 1000
    link/ether 00:50:56:94:c1:79 brd ff:ff:ff:ff:ff:ff
    inet 192.168.1.212/24 brd 192.168.1.255 scope global ens192
       valid_lft forever preferred_lft forever
    inet 192.168.1.110/32 scope global ens192
       valid_lft forever preferred_lft forever
    inet6 fe80::250:56ff:fe94:c179/64 scope link
       valid_lft forever preferred_lft forever
```

By default the VIP sits on the master host. Stop the master's haproxy service, then use ip addr to see which machine holds the virtual IP; if it has floated to the backup host, active/standby failover is working. After starting the master's haproxy service again, ip addr should show the VIP floating back to the master. To test the service, access the HAProxy service through the virtual IP plus the service port (15670 for AMQP, as mapped in the compose file). At this point, the highly available RabbitMQ cluster and the HAProxy soft load balancer are complete.
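As a closing aid (not from the original article), here is a minimal sketch that automates the failover check described above. It assumes ssh access to the master at 192.168.1.212, the container name mcst-haproxy from the compose file, and the VIP 192.168.1.110 used throughout:

```
#!/bin/bash
# Hypothetical failover smoke test using the IPs from this article.
MASTER=192.168.1.212
VIP=192.168.1.110

# 1. Stop HAProxy on the master (compose container name: mcst-haproxy).
ssh root@${MASTER} "docker stop mcst-haproxy"
sleep 10   # give VRRP time to move the VIP to the backup

# 2. The stats page should now answer through the VIP from the backup.
curl -s -u admin:admin123 -o /dev/null -w "stats page: HTTP %{http_code}\n" \
  "http://${VIP}:8100/stats"

# 3. Restart HAProxy on the master; the VIP should preempt back.
ssh root@${MASTER} "docker start mcst-haproxy"
```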