1. Kerberos overview

Kerberos lets you place authentication keys on a trusted node before the cluster is deployed. While the cluster runs, its nodes use these keys to authenticate with one another, and only nodes that pass authentication may provide services. A node that tries to impersonate a cluster member cannot communicate with the nodes inside the cluster, because it never received the pre-distributed key material. This prevents a Hadoop cluster from being used or tampered with maliciously, and protects the cluster's reliability and security.

Terminology:
AS (Authentication Server): the authentication server
KDC (Key Distribution Center): the key distribution center
TGT (Ticket Granting Ticket): the ticket-granting ticket
TGS (Ticket Granting Server): the ticket-granting server
SS (Service Server): the actual service provider
Principal: an identity that can be authenticated
Ticket: the credential a client presents to prove its identity; it contains the user name, IP address, a timestamp, a validity period, and a session key.

2. Install ZooKeeper (the steps below use CentOS 8 as an example)

# Download the binary tarball
wget https://archive.apache.org/dist/zookeeper/zookeeper-3.6.3/zookeeper-3.6.3.tar.gz
# Unpack
tar -zxvf zookeeper-3.6.3.tar.gz
# Enter the ZooKeeper root directory and rename conf/zoo_sample.cfg to zoo.cfg
# Start the server from the directory above bin
./bin/zkServer.sh start
# Check that it started
ps -aux | grep 'zookeeper-3.6.3'
# or
ps -ef | grep 'zookeeper-3.6.3'

3. Install and start Kafka

# Download
wget http://mirrors.hust.edu.cn/apache/kafka/2.8.0/kafka_2.12-2.8.0.tgz
# Unpack
tar -zxvf kafka_2.12-2.8.0.tgz
# Rename
mv kafka_2.12-2.8.0 kafka-2.8.0

# Edit config/server.properties in the Kafka root directory and add the following
port=9092
# Internal address (this machine's IP)
host.name=10.206.0.17
# Public address (the cloud server's IP; remove this line if you do not need external access)
advertised.host.name=119.45.174.249

# Start Kafka (-daemon runs it in the background; run from the directory above bin)
./bin/kafka-server-start.sh -daemon config/server.properties
# Stop Kafka
./bin/kafka-server-stop.sh
# Create a topic
./bin/kafka-topics.sh --create --replication-factor 1 --partitions 1 --topic test --zookeeper localhost:2181/kafka
# Start a console producer
./bin/kafka-console-producer.sh --broker-list localhost:9092 --topic test
# Start a console consumer
./bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic test --from-beginning

# To accept connections from the outside, stop the firewall
# Check the firewall status
systemctl status firewalld.service
# Stop the firewall
systemctl stop firewalld.service
# Start the firewall
systemctl start firewalld.service
# Start the firewall automatically on boot
systemctl enable firewalld.service

4. Install and configure Kerberos

1. Install the Kerberos packages
yum install krb5-server krb5-libs krb5-workstation -y

2. Configure kdc.conf
vim /var/kerberos/krb5kdc/kdc.conf
The contents are as follows:

[kdcdefaults]
 kdc_ports = 88
 kdc_tcp_ports = 88

[realms]
 HADOOP.COM = {
  #master_key_type = aes256-cts
  acl_file = /var/kerberos/krb5kdc/kadm5.acl
  dict_file = /usr/share/dict/words
  admin_keytab = /var/kerberos/krb5kdc/kadm5.keytab
  max_renewable_life = 7d
  supported_enctypes = aes128-cts:normal des3-hmac-sha1:normal arcfour-hmac:normal camellia256-cts:normal camellia128-cts:normal des-hmac-sha1:normal des-cbc-md5:normal des-cbc-crc:normal
 }

HADOOP.COM: the realm name can be chosen freely; Kerberos supports multiple realms, and upper case is conventional.
master_key_type and supported_enctypes default to aes256-cts; because Java needs extra JCE policy jars to use aes256-cts, it is not used here.
acl_file: permissions of users marked as admin.
admin_keytab: the keytab the KDC uses for validation.
supported_enctypes: the supported encryption types. Note that aes256-cts has been removed.

3. Configure krb5.conf
vim /etc/krb5.conf
The contents are as follows:

# Configuration snippets may also be placed in this directory
includedir /etc/krb5.conf.d/

[logging]
 default = FILE:/var/log/krb5libs.log
 kdc = FILE:/var/log/krb5kdc.log
 admin_server = FILE:/var/log/kadmind.log

[libdefaults]
 default_realm = HADOOP.COM
 dns_lookup_realm = false
 dns_lookup_kdc = false
 ticket_lifetime = 24h
 renew_lifetime = 7d
 forwardable = true
 clockskew = 120
 udp_preference_limit = 1

[realms]
 HADOOP.COM = {
  kdc = es1           # change to your server's hostname or IP
  admin_server = es1  # change to your server's hostname or IP
 }

[domain_realm]
 .hadoop.com = HADOOP.COM
 hadoop.com = HADOOP.COM

4. Initialize the Kerberos database
kdb5_util create -s -r HADOOP.COM

5. Grant the database administrator ACL permissions
vim /var/kerberos/krb5kdc/kadm5.acl
# change the contents to
*/admin@HADOOP.COM

6. Start the Kerberos daemons
# Start
systemctl start kadmin krb5kdc
# Start automatically on boot
systemctl enable kadmin krb5kdc

7. Create the Kafka principal
kadmin.local
addprinc kafka/es1
# quit kadmin.local
exit

8. Integrate Kafka with Kerberos

1. Generate the user keytab (this file is needed later for the Kafka configuration and for client authentication)
kadmin.local -q "xst -k /var/kerberos/krb5kdc/kadm5.keytab kafka/es1@HADOOP.COM"

2. Create kafka-jaas.conf under config in the Kafka installation directory
The contents are as follows:

KafkaServer {
  com.sun.security.auth.module.Krb5LoginModule required
  useKeyTab=true
  storeKey=true
  keyTab="/var/kerberos/krb5kdc/kadm5.keytab"
  principal="kafka/es1@HADOOP.COM";
};
KafkaClient {
  com.sun.security.auth.module.Krb5LoginModule required
  useKeyTab=true
  useTicketCache=true
  keyTab="/var/kerberos/krb5kdc/kadm5.keytab"
  principal="kafka/es1@HADOOP.COM";
};

3. Add the following to config/server.properties:
# use the matching hostname in both listeners
advertised.listeners=SASL_PLAINTEXT://es1:9092
listeners=SASL_PLAINTEXT://es1:9092
security.inter.broker.protocol=SASL_PLAINTEXT
sasl.mechanism.inter.broker.protocol=GSSAPI
sasl.enabled.mechanisms=GSSAPI
sasl.kerberos.service.name=kafka

4. Add the Kafka JVM parameters to the bin/kafka-run-class.sh script (note: remove the "<" and ">"; if an option to be added already appears in the original value, keep only one copy)
# JVM performance options
if [ -z "$KAFKA_JVM_PERFORMANCE_OPTS" ]; then
  KAFKA_JVM_PERFORMANCE_OPTS="-server -XX:+UseG1GC -XX:MaxGCPauseMillis=20 -XX:InitiatingHeapOccupancyPercent=35 -XX:+ExplicitGCInvokesConcurrent <-Djava.awt.headless=true -Djava.security.krb5.conf=/etc/krb5.conf -Djava.security.auth.login.config=/home/xingtongping/kafka_2.12-2.2.0/config/kafka-jaas.conf>"
fi

5. Configure config/producer.properties: the Kafka producer Kerberos settings
security.protocol=SASL_PLAINTEXT
sasl.mechanism=GSSAPI
sasl.kerberos.service.name=kafka

6. Configure config/consumer.properties: the Kafka consumer Kerberos settings
security.protocol=SASL_PLAINTEXT
sasl.mechanism=GSSAPI
sasl.kerberos.service.name=kafka

5. Testing with the Linux commands

1. Start the Kafka service
./bin/kafka-server-start.sh config/server.properties
2. Start a producer
./bin/kafka-console-producer.sh --broker-list es1:9092 --topic test --producer.config config/producer.properties
3. Start a consumer
./bin/kafka-console-consumer.sh --bootstrap-server es1:9092 --topic test --consumer.config config/consumer.properties

6. Java client consumption test (local environment)

1. Download krb5.conf and kafka-jaas.conf to the local machine, and change the keytab path in kafka-jaas.conf to the local keytab path.
2. If you use the host name, remember to edit the hosts file and add: 172.16.110.173 es1
3. Add the following test code:

// Authentication settings
System.setProperty("java.security.krb5.conf", "D:\\zookeeper\\kerberos\\bd\\krb5.conf");
System.setProperty("java.security.auth.login.config", "D:\\zookeeper\\kerberos\\bd\\kafka-jaas.conf");

Properties props = new Properties();
// Cluster addresses; separate multiple addresses with ","
props.put("bootstrap.servers", "es1:9092");          // host name or IP
props.put("sasl.kerberos.service.name", "kafka");    // authentication setting
props.put("sasl.mechanism", "GSSAPI");               // authentication setting
props.put("security.protocol", "SASL_PLAINTEXT");    // authentication setting
props.put("group.id", "1");
props.put("enable.auto.commit", "true");
props.put("auto.commit.interval.ms", "1000");
props.put("session.timeout.ms", "30000");
props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
// Create the consumer and poll the test topic
KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);
consumer.subscribe(Collections.singletonList("test"));
while (true) {
    ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(100));
    for (ConsumerRecord<String, String> record : records) {
        System.out.printf("offset = %d, key = %s, value = %s%n", record.offset(), record.key(), record.value());
    }
}
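The three Kerberos-related client settings are identical for the producer and the consumer, so they can be assembled once by a small helper. Below is a minimal sketch in plain Java; the class name KerberosClientProps and its parameters are illustrative, not part of Kafka's API, and the helper only builds a java.util.Properties object, so it runs without a broker:

```java
import java.util.Properties;

public class KerberosClientProps {
    // Assembles the SASL/GSSAPI client settings shown in the
    // producer.properties and consumer.properties sections above.
    // The caller merges these into its full producer or consumer config.
    public static Properties saslProps(String bootstrapServers, String serviceName) {
        Properties props = new Properties();
        props.put("bootstrap.servers", bootstrapServers);
        props.put("security.protocol", "SASL_PLAINTEXT");
        props.put("sasl.mechanism", "GSSAPI");
        props.put("sasl.kerberos.service.name", serviceName);
        return props;
    }

    public static void main(String[] args) {
        // Same values as the cluster configured above
        Properties props = saslProps("es1:9092", "kafka");
        System.out.println(props.getProperty("security.protocol")); // prints SASL_PLAINTEXT
    }
}
```

Keeping these four keys in one place avoids the most common failure mode, where the producer and consumer drift out of sync on the security settings.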

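Returning to the overview in section 1: an impostor node is locked out because every authenticated exchange requires proof of possession of the pre-distributed key. The toy sketch below is not the Kerberos protocol itself (which adds a KDC, tickets, and session keys); it only illustrates that core idea with an HMAC authenticator, and all names and values in it are made up for illustration:

```java
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;
import java.nio.charset.StandardCharsets;
import java.util.Arrays;

public class SharedKeyAuthDemo {
    // Computes an HMAC-SHA256 authenticator over a message; only a holder of
    // the same key can produce (or verify) a matching value.
    public static byte[] authenticator(byte[] key, String message) throws Exception {
        Mac mac = Mac.getInstance("HmacSHA256");
        mac.init(new SecretKeySpec(key, "HmacSHA256"));
        return mac.doFinal(message.getBytes(StandardCharsets.UTF_8));
    }

    public static void main(String[] args) throws Exception {
        byte[] clusterKey = "pre-distributed-secret".getBytes(StandardCharsets.UTF_8);
        byte[] rogueKey   = "guessed-secret".getBytes(StandardCharsets.UTF_8);
        String msg = "node-7|" + 1700000000L;  // node id + timestamp, as in a ticket

        byte[] fromMember   = authenticator(clusterKey, msg);
        byte[] fromImpostor = authenticator(rogueKey, msg);

        // The cluster recomputes the HMAC with its own copy of the key:
        System.out.println(Arrays.equals(fromMember, authenticator(clusterKey, msg)));   // true
        System.out.println(Arrays.equals(fromImpostor, authenticator(clusterKey, msg))); // false
    }
}
```

The impostor's authenticator never matches, so its messages are rejected without the key ever crossing the wire; Kerberos builds the same guarantee into every ticket exchange.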