Data Lakehouse (Part 4): Installing Hive
Upload the installation package to the /opt/software directory and extract it
[bigdata@node106 software]$ tar -zxvf hive-3.1.3-with-spark-3.3.1.tar.gz -C /opt/services
[bigdata@node106 services]$ mv apache-hive-3.1.3-bin apache-hive-3.1.3
Configure the environment variables (in /etc/profile.d/bigdata_env.sh)
export HIVE_HOME=/opt/services/apache-hive-3.1.3
export PATH=$PATH:$JAVA_HOME/bin:$HADOOP_HOME/bin:$HADOOP_HOME/sbin:$ZK_HOME/bin:$KAFKA_HOME/bin:$SEA_HOME/bin:$HIVE_HOME/bin
Distribute the environment variables
[bigdata@node106 bin]$ sudo ./xsync /etc/profile.d/bigdata_env.sh
Reload the environment variables; run this on all 5 machines
[bigdata@node106 ~]$ source /etc/profile
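If the variables took effect, a quick check should print the install path and the Hive version:
[bigdata@node106 ~]$ echo $HIVE_HOME
[bigdata@node106 ~]$ hive --version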
Upload the MySQL JDBC driver jar to Hive's lib directory
[bigdata@node106 software]$ cp mysql-connector-java-8.0.18.jar /opt/services/apache-hive-3.1.3/lib/
Resolve the jar conflict
[bigdata@node106 ~]$ mv $HIVE_HOME/lib/log4j-slf4j-impl-2.17.1.jar $HIVE_HOME/lib/log4j-slf4j-impl-2.17.1.jar.bak
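Hive 3.1.3 bundles the log4j-slf4j-impl binding, while Hadoop already provides an SLF4J binding of its own; keeping both on the classpath triggers the "multiple SLF4J bindings" warning, hence the rename above. To see which binding Hadoop ships (assuming the standard Hadoop directory layout):
[bigdata@node106 ~]$ ls $HADOOP_HOME/share/hadoop/common/lib/ | grep -i slf4j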
Configure hive-site.xml (all of the properties below go inside the <configuration> root element)
<!-- JDBC connection URL -->
<property>
<name>javax.jdo.option.ConnectionURL</name>
<value>jdbc:mysql://node106:3306/metastore?useSSL=false&amp;useUnicode=true&amp;characterEncoding=UTF-8&amp;allowPublicKeyRetrieval=true</value>
</property>
<!-- JDBC connection driver (Connector/J 8.x class name) -->
<property>
<name>javax.jdo.option.ConnectionDriverName</name>
<value>com.mysql.cj.jdbc.Driver</value>
</property>
<!-- JDBC connection username -->
<property>
<name>javax.jdo.option.ConnectionUserName</name>
<value>root</value>
</property>
<!-- JDBC connection password -->
<property>
<name>javax.jdo.option.ConnectionPassword</name>
<value>123456</value>
</property>
<!-- Disable metastore schema version verification -->
<property>
<name>hive.metastore.schema.verification</name>
<value>false</value>
</property>
<!-- Disable metastore event notification API authorization -->
<property>
<name>hive.metastore.event.db.notification.api.auth</name>
<value>false</value>
</property>
<!-- Hive's default warehouse directory on HDFS -->
<property>
<name>hive.metastore.warehouse.dir</name>
<value>/user/hive/warehouse</value>
</property>
<!-- Print column headers in query results -->
<property>
<name>hive.cli.print.header</name>
<value>true</value>
</property>
<!-- Show the current database in the CLI prompt -->
<property>
<name>hive.cli.print.current.db</name>
<value>true</value>
</property>
<!-- Remote metastore connection address -->
<property>
<name>hive.metastore.uris</name>
<value>thrift://node106:9083</value>
</property>
<!-- HiveServer2 Thrift port -->
<property>
<name>hive.server2.thrift.port</name>
<value>10000</value>
</property>
<!-- HiveServer2 Thrift bind host -->
<property>
<name>hive.server2.thrift.bind.host</name>
<value>node106</value>
</property>
<!-- Users granted the admin role -->
<property>
<name>hive.users.in.admin.role</name>
<value>bigdata</value>
</property>
<!-- Disable authorization checks -->
<property>
<name>hive.security.authorization.enabled</name>
<value>false</value>
</property>
<!-- Execution engine (MapReduce) -->
<property>
<name>hive.execution.engine</name>
<value>mr</value>
</property>
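All of the properties above must sit under the standard XML declaration and a single <configuration> root element. If xmllint happens to be installed, a quick well-formedness check will catch mistakes such as an unescaped & in the JDBC URL:
[bigdata@node106 conf]$ xmllint --noout hive-site.xml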
Configure the log files
[bigdata@node106 conf]$ cp hive-exec-log4j2.properties.template hive-exec-log4j2.properties
[bigdata@node106 conf]$ cp hive-log4j2.properties.template hive-log4j2.properties
Edit hive-log4j2.properties and set the log directory
property.hive.log.dir = /opt/services/apache-hive-3.1.3/logs
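The hive.sh script later in this post also redirects service output into this directory, so create it up front:
[bigdata@node106 ~]$ mkdir -p /opt/services/apache-hive-3.1.3/logs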
Edit hive-env.sh and set the heap size
[bigdata@node106 conf]$ cp hive-env.sh.template hive-env.sh
[bigdata@node106 conf]$ vim hive-env.sh
export HADOOP_HEAPSIZE=1024
Create the metastore database
[bigdata@node106 conf]$ mysql -uroot -p'123456'
mysql> create database if not exists metastore DEFAULT CHARACTER SET utf8 DEFAULT COLLATE utf8_general_ci;
Initialize the metastore schema
[bigdata@node106 bin]$ schematool -initSchema -dbType mysql -verbose
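If initialization succeeds, schematool ends with "schemaTool completed" and the metastore tables appear in MySQL; a quick sanity check:
[bigdata@node106 bin]$ mysql -uroot -p'123456' -e 'use metastore; show tables;'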
Change the character sets of the comment columns to fix garbled Chinese text
mysql> alter table DBS convert to character set utf8;
mysql> alter table COLUMNS_V2 character set utf8;
mysql> alter table COLUMNS_V2 change COMMENT COMMENT varchar(256) character set utf8;
mysql> alter table TABLE_PARAMS change PARAM_VALUE PARAM_VALUE mediumtext character set utf8;
mysql> alter table PARTITION_KEYS change PKEY_COMMENT PKEY_COMMENT varchar(4000) character set utf8;
mysql> alter table PARTITION_KEYS character set utf8;
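To confirm the changes took effect, inspect a table definition; the COMMENT column should now show CHARACTER SET utf8:
mysql> show create table COLUMNS_V2;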
Write the hive.sh startup script
[bigdata@node106 bin]$ vim hive.sh
#!/bin/bash
echo ==================== Starting Hive services =========================
echo ==================== Starting the metastore service ====================
ssh node106 "nohup $HIVE_HOME/bin/hive --service metastore > $HIVE_HOME/logs/metastore.log 2>&1 &"
echo ==================== Starting the hiveserver2 service =================
ssh node106 "nohup $HIVE_HOME/bin/hive --service hiveserver2 > $HIVE_HOME/logs/hiveserver2.log 2>&1 &"
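Note that the script ignores its arguments, so the "start" passed to it below is purely cosmetic. A minimal sketch of a matching stop command, assuming the two services show up as RunJar processes whose arguments contain HiveMetaStore and HiveServer2 (confirm the patterns with ps -ef on your own cluster before relying on this):
[bigdata@node106 bin]$ ssh node106 "ps -ef | grep -E 'HiveMetaStore|HiveServer2' | grep -v grep | awk '{print \$2}' | xargs -r kill"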
Make hive.sh executable
[bigdata@node106 bin]$ chmod +x hive.sh
Distribute hive.sh
[bigdata@node106 bin]$ xsync hive.sh
Copy the Hive installation to the other machines
[bigdata@node107 bin]$ scp -r bigdata@node106:/opt/services/apache-hive-3.1.3/ /opt/services/apache-hive-3.1.3/
[bigdata@node108 bin]$ scp -r bigdata@node106:/opt/services/apache-hive-3.1.3/ /opt/services/apache-hive-3.1.3/
Start Hive
[bigdata@node106 bin]$ hive.sh start
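Once both services are up, verify the deployment by connecting through HiveServer2 with Beeline (port 10000 and user bigdata, as configured in hive-site.xml above):
[bigdata@node106 bin]$ beeline -u jdbc:hive2://node106:10000 -n bigdata
0: jdbc:hive2://node106:10000> show databases;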