
Installing Hadoop and HBase on Linux

Published: 2023-03-18 19:14:46

Environment

Three CentOS 7 hosts:

192.168.122.101  hdfs1
192.168.122.102  hdfs2
192.168.122.103  hdfs3

hdfs1 is the master node; the other two are slaves. All three machines get the same installation and configuration. Set up passwordless SSH between them: if you only ever operate from the master node hdfs1, it is enough for the other nodes to trust hdfs1; if all three hosts trust each other, the steps below work the same from any of them.

Tune kernel parameters

vim /etc/sysctl.conf

net.ipv4.tcp_syn_retries = 1
net.ipv4.tcp_synack_retries = 1
net.ipv4.tcp_keepalive_time = 600
net.ipv4.tcp_keepalive_probes = 3
net.ipv4.tcp_keepalive_intvl = 15
net.ipv4.tcp_max_tw_buckets = 65536
net.ipv4.tcp_tw_recycle = 1
net.ipv4.tcp_tw_reuse = 1
net.ipv4.tcp_max_orphans = 32768
net.ipv4.tcp_syncookies = 1
net.ipv4.tcp_max_syn_backlog = 16384
net.ipv4.tcp_wmem = 8192 131072 16777216
net.ipv4.tcp_rmem = 32768 131072 16777216
net.ipv4.tcp_mem = 786432 1048576 1572864
net.ipv4.ip_local_port_range = 1024 65000
net.ipv4.ip_conntrack_max = 65536
net.ipv4.netfilter.ip_conntrack_max = 65536
net.ipv4.netfilter.ip_conntrack_tcp_timeout_established = 180
net.core.somaxconn = 16384
net.core.netdev_max_backlog = 16384
vm.max_map_count = 262144

Raise file-handle limits

vim /etc/security/limits.conf

* soft nproc 655360
* hard nproc 655360
* soft nofile 655360
* hard nofile 655360

Set up hostname resolution

Set the matching hostname on each node:

hostnamectl set-hostname hdfs1

Then add hosts records (DNS resolution also works and is more flexible):

vim /etc/hosts

192.168.122.101 hdfs1
192.168.122.102 hdfs2
192.168.122.103 hdfs3

Create the user and directories

useradd hadoop
passwd hadoop
mkdir -p /apps/
mkdir -pv /data/hdfs/hadoop
mkdir -pv /data/hdfs/hbase
chown -R hadoop:hadoop /data/hdfs

Passwordless SSH

su - hadoop
ssh-keygen
ssh-copy-id hadoop@hdfs1
ssh-copy-id hadoop@hdfs2
ssh-copy-id hadoop@hdfs3

Keep pressing Enter through the ssh-keygen prompts; ssh-copy-id asks for the hadoop password when it needs it.

Download the JDK

Download address (login required):
www.oracle.com/java/technologies/javase/javase-jdk8-downloads.html

tar zxvf jdk-8u271-linux-x64.tar.gz
mv jdk1.8.0_271 /apps/
cd /apps/
ln -s jdk1.8.0_271 jdk
cd -

Append JAVA_HOME to /etc/profile; the grep guard keeps the script from appending twice:

if grep '#modify by script' /etc/profile > /dev/null 2>&1; then
    echo "already set JAVA_HOME"
else
    cp /etc/profile /etc/profile_bak$(date +%Y%m%d%H%M%S)
    cat >> /etc/profile << 'EOF'
#modify by script
export JAVA_HOME=/apps/jdk
export PATH=$JAVA_HOME/bin:$PATH
EOF
fi

Download Hadoop and HBase the same way, unpack them under /apps/, and symlink them to /apps/hadoop and /apps/hbase; those are the paths all of the configuration below relies on.

Configure hadoop

1. Configure the default filesystem and temporary directory

vim /apps/hadoop/etc/hadoop/core-site.xml

Add inside the configuration element:

<property>
    <name>fs.defaultFS</name>
    <value>hdfs://hdfs1:9000</value>
</property>
<property>
    <name>hadoop.tmp.dir</name>
    <value>/data/hdfs/hadoop/tmp</value>
</property>

2. Configure the namenode and datanode

vim /apps/hadoop/etc/hadoop/hdfs-site.xml

Add after the configuration tag:

<property>
    <name>dfs.namenode.http-address</name>
    <value>hdfs1:50070</value>
</property>
<property>
    <name>dfs.namenode.secondary.http-address</name>
    <value>hdfs2:50070</value>
</property>
<property>
    <name>dfs.namenode.name.dir</name>
    <value>/data/hdfs/hadoop/name</value>
</property>
<property>
    <name>dfs.replication</name>
    <value>2</value>
</property>
<property>
    <name>dfs.datanode.data.dir</name>
    <value>/data/hdfs/hadoop/datanode</value>
</property>
<property>
    <name>dfs.permissions</name>
    <value>false</value>
</property>

3. Configure environment variables

vim /apps/hadoop/etc/hadoop/hadoop-env.sh

Change JAVA_HOME:

export JAVA_HOME=/apps/jdk

Heap size and similar parameters can also be adjusted here to suit your needs.

4. Set the master node

vim /apps/hadoop/etc/hadoop/masters

Add the master node, normally by hostname:

hdfs1

5. Set the slave nodes

vim /apps/hadoop/etc/hadoop/slaves

Add the slave nodes, normally by hostname:

hdfs1
hdfs2
hdfs3

Configure hbase

1. Configure environment variables

vim /apps/hbase/conf/hbase-env.sh

Change JAVA_HOME:

export JAVA_HOME=/apps/jdk

Stack size and similar parameters can also be set here to suit your needs.

2. Set the hadoop and zookeeper information

vim /apps/hbase/conf/hbase-site.xml

Add:

<property>
    <name>hbase.rootdir</name>
    <value>hdfs://hdfs1:9000/hbase/hbase_db</value>
</property>
<property>
    <name>hbase.cluster.distributed</name>
    <value>true</value>
</property>
<property>
    <name>hbase.zookeeper.quorum</name>
    <value>hdfs1,hdfs2,hdfs3</value>
</property>
<property>
    <name>hbase.zookeeper.property.dataDir</name>
    <value>/data/hdfs/hbase/zookeeper</value>
</property>
<property>
    <name>hbase.master.info.port</name>
    <value>16610</value>
</property>

3. Set the region servers

vim /apps/hbase/conf/regionservers

hdfs1
hdfs2
hdfs3

Start and test

su - hadoop
/apps/hadoop/sbin/start-all.sh
/apps/hbase/bin/start-hbase.sh

Summary

These steps give only a basic deployment; a real application still needs its own testing and tuning.
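For that first round of testing, a few sanity checks can confirm the daemons actually came up. The sketch below is not part of the original walkthrough: it assumes the paths used above, and the exact process names vary a little across Hadoop and HBase versions. (On a brand-new cluster, the namenode also has to be formatted once, with /apps/hadoop/bin/hdfs namenode -format, before the very first start.)

su - hadoop

# JVM daemons on this node; on hdfs1 expect roughly:
#   NameNode, DataNode, HMaster, HRegionServer, HQuorumPeer
# (HQuorumPeer appears because HBase is managing ZooKeeper itself here)
jps

# HDFS cluster report: all three datanodes should be listed as live
/apps/hadoop/bin/hdfs dfsadmin -report

# Simple HDFS write/read round trip
echo hello > /tmp/hello.txt
/apps/hadoop/bin/hdfs dfs -mkdir -p /test
/apps/hadoop/bin/hdfs dfs -put /tmp/hello.txt /test/
/apps/hadoop/bin/hdfs dfs -cat /test/hello.txt

# HBase status: the active master plus the region servers on hdfs1-hdfs3
echo "status" | /apps/hbase/bin/hbase shell

The web UIs configured above tell the same story: the namenode at http://hdfs1:50070 and the HBase master UI at http://hdfs1:16610.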
