This post is part 1 of a Hive study series: installing Hive 1.2.1. Hopefully it is a useful reference for anyone working through the same setup; follow along below!
Hive only needs to be installed on a single node.

1. Upload the tar package

2. Extract it and rename the directory:

tar -zxvf hive-1.2.1.tar.gz -C /usr/local
mv /usr/local/hive-1.2.1 /usr/local/hive
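The "upload" in step 1 is usually just copying the tarball onto the target node, for example with scp (the hadoop01 hostname and destination path here are placeholders for your own environment):

scp hive-1.2.1.tar.gz root@hadoop01:/root/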
3. Install the MySQL database (switch to the root user). There is no restriction on where it is installed, as long as the node can reach the Hadoop cluster. The MySQL steps below are for reference only; different MySQL versions have their own installation procedures.

rpm -qa | grep mysql
rpm -e mysql-libs-5.1.66-2.el6_3.i686 --nodeps
rpm -ivh MySQL-server-5.1.73-1.glibc23.i386.rpm
rpm -ivh MySQL-client-5.1.73-1.glibc23.i386.rpm

Set the MySQL root password (note: remove the anonymous users and allow remote connections when prompted):

/usr/bin/mysql_secure_installation

Log in to MySQL:

mysql -u root -p

4. Configure Hive
(a) Configure hive-env.sh: vi conf/hive-env.sh and set HADOOP_HOME in it (a minimal sketch follows after the hive-site.xml listing below).
(b) Configure the metastore database: vi hive-site.xml and add the following:
<configuration>
  <property>
    <name>javax.jdo.option.ConnectionURL</name>
    <value>jdbc:mysql://192.168.31.11:3306/hive?createDatabaseIfNotExist=true</value>
    <description>JDBC connect string for a JDBC metastore</description>
  </property>
  <property>
    <name>javax.jdo.option.ConnectionDriverName</name>
    <value>com.mysql.jdbc.Driver</value>
    <description>Driver class name for a JDBC metastore</description>
  </property>
  <property>
    <name>javax.jdo.option.ConnectionUserName</name>
    <value>root</value>
    <description>username to use against metastore database</description>
  </property>
  <property>
    <name>javax.jdo.option.ConnectionPassword</name>
    <value>123456</value>
    <description>password to use against metastore database</description>
  </property>
</configuration>
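For step 4(a), hive-env.sh only needs to tell Hive where Hadoop lives. A minimal sketch, assuming the Hadoop installation path used later in this post:

# conf/hive-env.sh -- point Hive at the Hadoop installation
export HADOOP_HOME=/home/hadoop/app/hadoop-2.6.4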
5. After installing Hive and MySQL, copy the MySQL JDBC driver jar into the $HIVE_HOME/lib directory. If you run into a permissions problem, grant access in MySQL (execute on the machine where MySQL is installed):

mysql -uroot -p

Then run the statements below. Here *.* means every table in every database, and % means connections are accepted from any IP address or host:

GRANT ALL PRIVILEGES ON *.* TO 'root'@'%' IDENTIFIED BY 'root' WITH GRANT OPTION;
FLUSH PRIVILEGES;

6. Jline version mismatch: copy jline-2.12.jar from Hive's lib directory over the older one that ships with Hadoop, replacing
/home/hadoop/app/hadoop-2.6.4/share/hadoop/yarn/lib/jline-0.9.94.jar
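In command form, the driver copy and the jline swap might look like this (the connector jar name is a placeholder for whichever version you downloaded):

cp mysql-connector-java-5.1.39-bin.jar $HIVE_HOME/lib/
rm /home/hadoop/app/hadoop-2.6.4/share/hadoop/yarn/lib/jline-0.9.94.jar
cp $HIVE_HOME/lib/jline-2.12.jar /home/hadoop/app/hadoop-2.6.4/share/hadoop/yarn/lib/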
Start Hive:

bin/hive

If you don't want to cd into hive/bin every time you start Hive, add it to the environment variables.
Edit the environment variables in /etc/profile:
vim /etc/profile
Append the following at the end of the file:

#hive
export HIVE_HOME=/usr/local/hive
export PATH=$PATH:$HIVE_HOME/bin

Make it take effect immediately:
source /etc/profile
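A quick sanity check that the PATH change took effect (hive should now resolve from any directory):

which hive
hive --version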
----------------------------------------------------------------------------------------------------

7. Creating tables (internal tables by default):

create table trade_detail(id bigint, account string, income double, expenses double, time string) row format delimited fields terminated by '\t';

Creating a partitioned table:

create table td_part(id bigint, account string, income double, expenses double, time string) partitioned by (logdate string) row format delimited fields terminated by '\t';

Creating an external table:

create external table td_ext(id bigint, account string, income double, expenses double, time string) row format delimited fields terminated by '\t' location '/td_ext';

8. Working with partitioned tables. The difference between a plain table and a partitioned table: when large volumes of data keep arriving, build a partitioned table.

create table book (id bigint, name string) partitioned by (pubdate string) row format delimited fields terminated by '\t';

Loading data into a partition:

load data local inpath './book.txt' overwrite into table book partition (pubdate='2010-08-22');
load data local inpath '/root/data.am' into table beauties partition (nation="USA");

select nation, avg(size) from beauties group by nation order by avg(size);
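Once partitions are loaded, it is worth verifying them and filtering on the partition column so Hive prunes partitions at query time. A minimal sketch against the book table above, run at the hive> prompt:

show partitions book;
select id, name from book where pubdate = '2010-08-22' limit 10;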
Connecting to the Hive service with beeline. The connection string is jdbc:hive2://localhost:10000:

beeline> !connect jdbc:hive2://localhost:10000
Connecting to jdbc:hive2://localhost:10000
Enter username for jdbc:hive2://localhost:10000: hadoop     (defaults to the machine's current login user)
Enter password for jdbc:hive2://localhost:10000:     (empty by default)
Connected to: Apache Hive (version 1.2.1)
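beeline talks to the hiveserver2 service rather than the hive CLI, so hiveserver2 must be running before any of these connections will succeed. One common way to start it on the server node, as a sketch (it listens on port 10000 by default):

nohup $HIVE_HOME/bin/hiveserver2 > hiveserver2.log 2>&1 &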
Once it has started successfully, you can connect from other nodes with beeline.
- Option (1)
Run hive/bin/beeline and press Enter to get the beeline prompt, then issue the connect command to reach hiveserver2:
beeline> !connect jdbc:hive2://hadoop01:10000
(hadoop01 is the hostname of the machine where hiveserver2 is running; the default port is 10000)
- Option (2)
Or connect directly at startup:
bin/beeline -u jdbc:hive2://mini1:10000 -n hadoop
(here mini1 is the hiveserver2 host and -n supplies the username)
From there you can run normal SQL queries.
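For example, a minimal check that the connection works, reusing the book table created in step 8:

show databases;
select * from book limit 5;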
That wraps up this post on installing Hive 1.2.1. Hopefully the walkthrough saves fellow developers some time!