This article walks through using Sqoop to move data in both directions between MySQL and Hive. We hope it serves as a useful reference for developers facing the same task; follow along with the steps below.
The tests below were run on a pseudo-distributed setup:
hadoop-1.2.1
sqoop-1.4.4.bin__hadoop-1.0.0
hive-0.12.0
1. Data Preparation
1.1 Create a test user sqoop
Note: this step is not mandatory; you can instead run the grant below as MySQL's root user, which sets up the sqoop account with full privileges:
grant all privileges on *.* to 'sqoop'@'%' identified by 'sqoop' with grant option;
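If you prefer an explicit create-user step (not shown in the original), a minimal sketch run as root under MySQL 5.x might look like this; the '%' host wildcard and the password 'sqoop' simply mirror the values used throughout this walkthrough:
mysql -uroot -p
mysql> create user 'sqoop'@'%' identified by 'sqoop';
mysql> grant all privileges on *.* to 'sqoop'@'%' with grant option;
mysql> flush privileges;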
1.2 As the sqoop user, create a sqoop database and an employee table inside it
create database sqoop;
use sqoop;
create table employee(employee_id int not null primary key, employee_name varchar(30));
insert into employee values(101,'zhangsan');
insert into employee values(102,'lisi');
insert into employee values(103,'wangwu');
The employee table in the sqoop database now holds three records, which you can verify with a select, as shown below.
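A quick check; you should see the three rows just inserted:
mysql> select * from sqoop.employee;
+-------------+---------------+
| employee_id | employee_name |
+-------------+---------------+
|         101 | zhangsan      |
|         102 | lisi          |
|         103 | wangwu        |
+-------------+---------------+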
2. Import the employee table (sqoop database, sqoop user in MySQL) into Hive
2.1 Importing the MySQL employee table into Hive
sqoop import --connect jdbc:mysql://localhost:3306/sqoop --username sqoop --password sqoop --table employee --hive-import -m 1
Check the imported result under /user/hive/warehouse on HDFS:
cwjy1202@cwjy1202-Lenovo-IdeaPad-Y471A:~$ hadoop dfs -ls /user/hive/warehouse/employee/
Warning: $HADOOP_HOME is deprecated.
Found 2 items
-rw-r--r--   1 cwjy1202 supergroup          0 2014-06-04 14:38 /user/hive/warehouse/employee/_SUCCESS
-rw-r--r--   1 cwjy1202 supergroup         33 2014-06-04 14:38 /user/hive/warehouse/employee/part-m-00000
cwjy1202@cwjy1202-Lenovo-IdeaPad-Y471A:~$ hadoop dfs -cat /user/hive/warehouse/employee/part-m-00000
Warning: $HADOOP_HOME is deprecated.
101zhangsan
102lisi
103wangwu
The two columns appear glued together because Hive's default field delimiter '\001' is a non-printing character.
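You can also verify from the Hive side; a sketch, assuming the hive CLI from hive-0.12.0 is on your PATH and the import created the employee table in the default database:
hive -e "select * from employee;"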
2.2 Running the import without the -m 1 parameter
Run the same import without -m 1:
sqoop import --connect jdbc:mysql://localhost:3306/sqoop --username sqoop --password sqoop --table employee --hive-import
This time the three records land in part-m-00000, part-m-00001, and part-m-00002: without -m 1, Sqoop parallelizes the import across multiple map tasks (up to four by default), splitting the input on the table's primary key, so here each row ends up in its own output file.
Found 4 items
-rw-r--r--   1 cwjy1202 supergroup          0 2014-06-04 14:10 /user/hive/warehouse/employee/_SUCCESS
-rw-r--r--   1 cwjy1202 supergroup         13 2014-06-04 14:10 /user/hive/warehouse/employee/part-m-00000
-rw-r--r--   1 cwjy1202 supergroup          9 2014-06-04 14:10 /user/hive/warehouse/employee/part-m-00001
-rw-r--r--   1 cwjy1202 supergroup         11 2014-06-04 14:10 /user/hive/warehouse/employee/part-m-00002
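If you want explicit control over the parallelism, a sketch; the -m value and the --split-by column here are illustrative choices, not from the original run:
sqoop import --connect jdbc:mysql://localhost:3306/sqoop --username sqoop --password sqoop --table employee --hive-import -m 2 --split-by employee_id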
3. Exporting the Hive data back into MySQL
3.1 Empty the employee table in MySQL
At a Linux terminal, run:
mysql -usqoop -p
(enter the password at the prompt)
mysql> truncate table sqoop.employee;
mysql> select * from sqoop.employee;
Empty set (0.00 sec)
The table was emptied successfully.
3.2 From Hive to MySQL
At a Linux terminal, run:
sqoop export --connect jdbc:mysql://localhost:3306/sqoop --username sqoop --password sqoop --table employee --export-dir /user/hive/warehouse/employee --input-fields-terminated-by '\001'
Notes on jdbc:mysql://localhost:3306/sqoop: 3306 is MySQL's port; localhost is the host where MySQL runs, so adjust it for your own setup; sqoop is the target database in MySQL.
Hive's default field delimiter is '\001', which is why the export passes --input-fields-terminated-by '\001'.
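If your Hive table was created with a different delimiter, the export flag has to match it; a hypothetical sketch for a tab-delimited table (the '\t' here is an assumption, not the table from this walkthrough):
sqoop export --connect jdbc:mysql://localhost:3306/sqoop --username sqoop --password sqoop --table employee --export-dir /user/hive/warehouse/employee --input-fields-terminated-by '\t'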
The tail of the run reports that 3 records were exported:
14/06/04 14:49:45 INFO mapreduce.ExportJobBase: Transferred 760 bytes in 11.8562 seconds (64.1017 bytes/sec)
14/06/04 14:49:45 INFO mapreduce.ExportJobBase: Exported 3 records.
Query the employee table in MySQL to verify the data arrived:
mysql> select * from sqoop.employee;
+-------------+---------------+
| employee_id | employee_name |
+-------------+---------------+
|         101 | zhangsan      |
|         102 | lisi          |
|         103 | wangwu        |
+-------------+---------------+
3 rows in set (0.00 sec)
The export to MySQL succeeded.
One more note:
Because the MySQL employee table's schema is fixed and employee_id is its primary key, exporting the same data from Hive more than once only succeeds the first time; later attempts violate the primary-key constraint and fail.
To demonstrate, run the same export again at the Linux terminal:
sqoop export --connect jdbc:mysql://localhost:3306/sqoop --username sqoop --password sqoop --table employee --export-dir /user/hive/warehouse/employee --input-fields-terminated-by '\001'
This time the run shows failed tasks:
14/06/04 15:14:34 INFO mapred.JobClient: Task Id : attempt_201406041140_0012_m_000001_1, Status : FAILED
Task attempt_201406041140_0012_m_000001_1 failed to report status for 600 seconds. Killing!
14/06/04 15:14:36 INFO mapred.JobClient:  map 25% reduce 0%
14/06/04 15:14:37 INFO mapred.JobClient: Task Id : attempt_201406041140_0012_m_000002_1, Status : FAILED
Task attempt_201406041140_0012_m_000002_1 failed to report status for 600 seconds. Killing!
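If you genuinely need to re-run an export over existing rows, Sqoop's update mode is one way around the duplicate-key failure; a sketch, assuming you want matching rows updated in place (with --update-key and no --update-mode, Sqoop defaults to updateonly, so rows without a matching key are skipped rather than inserted):
sqoop export --connect jdbc:mysql://localhost:3306/sqoop --username sqoop --password sqoop --table employee --export-dir /user/hive/warehouse/employee --input-fields-terminated-by '\001' --update-key employee_id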
That wraps up this walkthrough of moving data between MySQL and Hive with Sqoop; we hope it proves helpful.