
Installing MariaDB, Hadoop, and Hive (Pseudo-Distributed) on Arch Linux / Manjaro

Posted: 2023-04-07 03:26:33 · Linux

Hadoop 2.x.y (pseudo-distributed)

Follow the Single Node Setup section of the official documentation for your version: https://hadoop.apache.org/docs/

First make sure ssh and rsync are installed. Then download the binary tarball, extract it, and set the extracted root directory as the HADOOP_HOME environment variable:

```sh
# example
export HADOOP_HOME=/home/yzj/Applications/hadoop/hadoop-2.10.0
export PATH=$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin
```

Edit HADOOP_HOME/etc/hadoop/hadoop-env.sh:

```sh
# Set this to the root directory of your Java installation;
# the path can be found with `whereis java`
export JAVA_HOME=/usr/lib/jvm/java-8-openjdk
```

Edit HADOOP_HOME/etc/hadoop/core-site.xml:

```xml
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://localhost:9000</value>
  </property>
</configuration>
```

Edit HADOOP_HOME/etc/hadoop/hdfs-site.xml:

```xml
<configuration>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/home/yzj/Applications/hadoop/hadoop-2.10.0/tmp</value>
  </property>
</configuration>
```

Set up ssh. Start the ssh service:

```sh
sudo systemctl start sshd.service
```

Check whether you can ssh to localhost without entering a password. If `ssh localhost` does not work:

```sh
ssh-keygen -t rsa -P '' -f ~/.ssh/id_rsa
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
chmod 0600 ~/.ssh/authorized_keys
```

Edit HADOOP_HOME/etc/hadoop/mapred-site.xml:

```xml
<configuration>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
</configuration>
```

Edit HADOOP_HOME/etc/hadoop/yarn-site.xml:

```xml
<configuration>
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
  </property>
</configuration>
```

Format HDFS:

```sh
$HADOOP_HOME/bin/hdfs namenode -format
```

Start HDFS:

```sh
$HADOOP_HOME/sbin/start-dfs.sh
```

Create the user directories:

```sh
$HADOOP_HOME/bin/hdfs dfs -mkdir /user
$HADOOP_HOME/bin/hdfs dfs -mkdir /user/<username>  # the current Linux user, e.g. hdfs dfs -mkdir /user/yzj
```

Start YARN:

```sh
$HADOOP_HOME/sbin/start-yarn.sh
```

Visit localhost:50070 for the HDFS web UI and localhost:8088 for the YARN web UI. To stop YARN:

```sh
$HADOOP_HOME/sbin/stop-yarn.sh
```

MariaDB

https://wiki.archlinux.org/index.php/MariaDB

Install MariaDB:

```sh
sudo pacman -S mariadb
```

After installation, run:

```sh
sudo mysql_install_db --user=mysql --basedir=/usr --datadir=/var/lib/mysql
```

Start the MySQL service:

```sh
sudo systemctl start mariadb.service
```

Run:

```sh
sudo mysql_secure_installation
```

During `mysql_secure_installation` you will be asked to set a password for the root user; keep the test database. Log into MySQL as root:

```sh
sudo mysql -u root -p
```

After logging in, switch from none to the mysql database to make sure it exists:

```sql
use mysql;
```

Update the privileges:

```sql
GRANT ALL ON *.* TO 'root'@'%' IDENTIFIED BY 'your password';
GRANT ALL ON *.* TO 'root'@'localhost' IDENTIFIED BY 'your password';
```

Refresh:

```sql
FLUSH PRIVILEGES;
```

Exit:

```sql
exit;
```

Hive 2.x.y

https://blog.csdn.net/qq_37106028/article/details/78247727
https://blog.csdn.net/Handoking/article/details/81388143

Download and extract the binary package, then set the environment variables:

```sh
# example
export HIVE_HOME=/home/yzj/Applications/hadoop/hive-2.3.6
export PATH=$PATH:$HIVE_HOME/bin
```

Create the configuration files:

```sh
cp $HIVE_HOME/conf/hive-env.sh.template $HIVE_HOME/conf/hive-env.sh
cp $HIVE_HOME/conf/hive-default.xml.template $HIVE_HOME/conf/hive-site.xml
cp $HIVE_HOME/conf/hive-log4j2.properties.template $HIVE_HOME/conf/hive-log4j2.properties
cp $HIVE_HOME/conf/hive-exec-log4j2.properties.template $HIVE_HOME/conf/hive-exec-log4j2.properties
```

Edit hive-env.sh:

```sh
JAVA_HOME=/usr/lib/jvm/java-8-openjdk
HADOOP_HOME=$HADOOP_HOME
HIVE_HOME=$HIVE_HOME
```

Edit hive-site.xml, setting the following properties:

- javax.jdo.option.ConnectionURL: jdbc:mysql://localhost:3306/hive?createDatabaseIfNotExist=true
- javax.jdo.option.ConnectionDriverName: com.mysql.cj.jdbc.Driver
- javax.jdo.option.ConnectionUserName: root
- javax.jdo.option.ConnectionPassword: set this yourself
- datanucleus.schema.autoCreateAll: true
- hive.querylog.location: set to a persistent directory
- hive.exec.local.scratchdir: same as above
- hive.downloaded.resources.dir: same as above

Copy the driver jar mysql-connector-java-8.x.y.jar into HIVE_HOME/lib; it can be downloaded from http://central.maven.org/maven2/mysql/mysql-connector-java/

Delete the jline jar under HADOOP_HOME/share/hadoop/yarn/lib and replace it with the jline jar from HIVE_HOME/lib (skip this step if there is no jline jar in that Hadoop directory).

Log into MySQL and run the latest hive-schema-x.y.z.mysql.sql shipped with Hive:

```sql
source /home/yzj/Applications/hadoop/hive-2.3.6/scripts/metastore/upgrade/mysql/hive-schema-2.3.0.mysql.sql;
```

In a terminal, run:

```sh
sudo schematool -dbType mysql -initSchema
```

Start Hive (start Hadoop first):

```sh
$HIVE_HOME/bin/hive
```
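Once everything above is running, a few optional smoke tests can confirm the stack works end to end. First, the Hadoop layer: the official single-node guide uses the bundled grep example job for exactly this purpose. A minimal sketch, assuming the Hadoop 2.10.0 layout used in the paths above and that /user/<username> was already created in HDFS:

```sh
# Run the bundled grep example against Hadoop's own config files.
# The jar version must match your installation (2.10.0 here is an assumption).
hdfs dfs -mkdir input
hdfs dfs -put $HADOOP_HOME/etc/hadoop/*.xml input
hadoop jar $HADOOP_HOME/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.10.0.jar \
    grep input output 'dfs[a-z.]+'
hdfs dfs -cat 'output/*'   # should print counts for properties such as dfs.replication
```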
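Next, the metastore: after running schematool you can check that the schema actually landed in MariaDB. A minimal sketch, assuming the database is named hive as in the ConnectionURL above:

```sh
# List the metastore tables; a correctly initialized schema contains
# tables such as DBS, TBLS, and SDS.
sudo mysql -u root -p -e 'USE hive; SHOW TABLES;'
```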
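Finally, a quick end-to-end Hive check; the table name smoke_test is just an illustrative placeholder:

```sh
# Create a throwaway table, write one row (this launches a MapReduce job,
# so HDFS and YARN must be running), read it back, then clean up.
$HIVE_HOME/bin/hive -e "
CREATE TABLE IF NOT EXISTS smoke_test (id INT, msg STRING);
INSERT INTO smoke_test VALUES (1, 'hello');
SELECT * FROM smoke_test;
DROP TABLE smoke_test;
"
```

If the SELECT prints the inserted row, Hadoop, MariaDB, and Hive are all wired together correctly.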