Hadoop Environment Setup: Configuring the Hive Environment in Detail
1. Copy the downloaded Hive archive to the /opt/software/ directory.
Package version: apache-hive-3.1.2-bin.tar.gz

2. Extract the archive into the /opt/module/ directory:
cd /opt/software/
tar -zxvf apache-hive-3.1.2-bin.tar.gz -C /opt/module/
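As an optional check, list the target directory to confirm the extraction produced the apache-hive-3.1.2-bin folder:
ls /opt/module/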
3. Edit the system environment variables:
vi /etc/profile
Add the following lines in the editor:
export HIVE_HOME=/opt/module/apache-hive-3.1.2-bin
export PATH=$PATH:$HADOOP_HOME/sbin:$HIVE_HOME/bin

4. Reload the environment configuration:
source /etc/profile
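To verify that the variables took effect (an optional check; paths assume the layout above), you can run:
echo $HIVE_HOME
which hive
The first command should print /opt/module/apache-hive-3.1.2-bin and the second should resolve to the hive script under its bin directory.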
5. Modify the Hive environment script:
cd /opt/module/apache-hive-3.1.2-bin/bin/
① Configure the hive-config.sh file:
vi hive-config.sh
Add the following lines in the editor:
export JAVA_HOME=/opt/module/jdk1.8.0_212
export HIVE_HOME=/opt/module/apache-hive-3.1.2-bin
export HADOOP_HOME=/opt/module/hadoop-3.2.0
export HIVE_CONF_DIR=/opt/module/apache-hive-3.1.2-bin/conf

6. Copy the Hive configuration template:
cd /opt/module/apache-hive-3.1.2-bin/conf/
cp hive-default.xml.template hive-site.xml
7. Edit the Hive configuration file, locate the corresponding properties, and modify them as shown below:
vi hive-site.xml
<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
<property>
<name>javax.jdo.option.ConnectionDriverName</name>
<value>com.mysql.cj.jdbc.Driver</value>
<description>Driver class name for a JDBC metastore</description>
</property>
<property>
<name>javax.jdo.option.ConnectionUserName</name>
<value>root</value>
<description>Username to use against metastore database</description>
</property>
<property>
<name>javax.jdo.option.ConnectionPassword</name>
<value>123456</value>
<!-- set your own password here -->
<description>password to use against metastore database</description>
</property>
<property>
<name>javax.jdo.option.ConnectionURL</name>
<value>jdbc:mysql://192.168.1.100:3306/hive?useUnicode=true&amp;characterEncoding=utf8&amp;useSSL=false&amp;serverTimezone=GMT</value>
<description>
JDBC connect string for a JDBC metastore.
To use SSL to encrypt/authenticate the connection, provide database-specific SSL flag in the connection URL.
For example, jdbc:postgresql://myhost/db?ssl=true for postgres database.
</description>
</property>
<property>
<name>datanucleus.schema.autoCreateAll</name>
<value>true</value>
<description>Auto creates necessary schema on a startup if one doesn't exist. Set this to false, after creating it once.To enable auto create also set hive.metastore.schema.verification=false. Auto creation is not recommended for production use cases, run schematool command instead.</description>
</property>
<property>
<name>hive.metastore.schema.verification</name>
<value>false</value>
<description>
Enforce metastore schema version consistency.
True: Verify that version information stored in is compatible with one from Hive jars. Also disable automatic
schema migration attempt. Users are required to manually migrate schema after Hive upgrade which ensures
proper metastore schema migration. (Default)
False: Warn if the version information stored in metastore doesn't match with one from in Hive jars.
</description>
</property>
<property>
<name>hive.exec.local.scratchdir</name>
<value>/opt/module/apache-hive-3.1.2-bin/tmp/${user.name}</value>
<description>Local scratch space for Hive jobs</description>
</property>
<property>
<name>system:java.io.tmpdir</name>
<value>/opt/module/apache-hive-3.1.2-bin/iotmp</value>
<description/>
</property>
<property>
<name>hive.downloaded.resources.dir</name>
<value>/opt/module/apache-hive-3.1.2-bin/tmp/${hive.session.id}_resources</value>
<description>Temporary local directory for added resources in the remote file system.</description>
</property>
<property>
<name>hive.querylog.location</name>
<value>/opt/module/apache-hive-3.1.2-bin/tmp/${system:user.name}</value>
<description>Location of Hive run time structured log file</description>
</property>
<property>
<name>hive.server2.logging.operation.log.location</name>
<value>/opt/module/apache-hive-3.1.2-bin/tmp/${system:user.name}/operation_logs</value>
<description>Top level directory where operation logs are stored if logging functionality is enabled</description>
</property>
<property>
<name>hive.metastore.db.type</name>
<value>mysql</value>
<description>
Expects one of [derby, oracle, mysql, mssql, postgres].
Type of database used by the metastore. Information schema & JDBCStorageHandler depend on it.
</description>
</property>
<property>
<name>hive.cli.print.current.db</name>
<value>true</value>
<description>Whether to include the current database in the Hive prompt.</description>
</property>
<property>
<name>hive.cli.print.header</name>
<value>true</value>
<description>Whether to print the names of the columns in query output.</description>
</property>
<property>
<name>hive.metastore.warehouse.dir</name>
<value>/opt/hive/warehouse</value>
<description>location of default database for the warehouse</description>
</property>
</configuration>
8. Upload the MySQL driver package to the /opt/module/apache-hive-3.1.2-bin/lib/ directory.
Driver package: mysql-connector-java-8.0.15.zip; unzip it and take the jar file from inside.
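A possible sequence, assuming the zip was placed in /opt/software/ and unpacks into a mysql-connector-java-8.0.15 directory containing mysql-connector-java-8.0.15.jar (directory and jar names are assumptions based on the package version), is:
cd /opt/software/
unzip mysql-connector-java-8.0.15.zip
cp mysql-connector-java-8.0.15/mysql-connector-java-8.0.15.jar /opt/module/apache-hive-3.1.2-bin/lib/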
9. Log in to MySQL and create a database named hive, making sure MySQL ends up with a database called hive:
mysql> create database hive;
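As an optional check, list the databases to confirm that hive now appears:
mysql> show databases;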
10. Initialize the metastore database:
schematool -dbType mysql -initSchema
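If initialization succeeds, the hive database in MySQL is populated with metastore tables; an optional way to verify this is:
mysql> use hive;
mysql> show tables;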
11. Start the cluster:
start-all.sh    (on Hadoop100)
start-yarn.sh   (on Hadoop101)
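To confirm the Hadoop daemons are running (an optional check), run jps on each node and look for processes such as NameNode, DataNode, ResourceManager, and NodeManager:
jps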
12. Start Hive:
hive
13. Check whether the startup succeeded:
show databases;
If the databases are listed, Hive started successfully.
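On a fresh installation only the built-in default database is usually present, so the expected output is typically just:
default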
This concludes this article on configuring the Hive environment for Hadoop.