Hadoop from Getting Started to Giving Up (1): Ingesting Data into HDFS with Flume

2024-06-09 10:32


1. Extract Flume into the /hadoop/ directory

tar -zxvf apache-flume-1.6.0-bin.tar.gz  -C /hadoop/
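The archive unpacks into a directory named apache-flume-1.6.0-bin, while the shell prompts later in this post refer to the Flume home simply as flume, so presumably the directory was renamed (or symlinked). A minimal sketch of that assumed step:

cd /hadoop
mv apache-flume-1.6.0-bin flume    # or: ln -s apache-flume-1.6.0-bin flume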


2. Configure the Flume agent

[hadoop@hadoop01 flume]$ cat conf/agent1.conf
# Name the components on this agent
agent1.sources = spooldirSource
agent1.channels = fileChannel
agent1.sinks = hdfsSink

# Describe/configure the source
agent1.sources.spooldirSource.type=spooldir
agent1.sources.spooldirSource.spoolDir=/home/hadoop/spooldir

# Describe the sink
agent1.sinks.hdfsSink.type=hdfs
agent1.sinks.hdfsSink.hdfs.path=hdfs://hadoop01:9000/flume/%y-%m-%d/%H%M/%S
agent1.sinks.hdfsSink.hdfs.round = true
agent1.sinks.hdfsSink.hdfs.roundValue = 10
agent1.sinks.hdfsSink.hdfs.roundUnit = minute
agent1.sinks.hdfsSink.hdfs.useLocalTimeStamp = true
agent1.sinks.hdfsSink.hdfs.fileType=DataStream

# Describe the channel
agent1.channels.fileChannel.type = file
agent1.channels.fileChannel.dataDirs=/hadoop/flume/datadir

# Bind the source and sink to the channel
agent1.sources.spooldirSource.channels=fileChannel
agent1.sinks.hdfsSink.channel=fileChannel
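Before starting the agent, the local directories referenced above should exist (the spooldir source in particular fails if its directory is missing); the time-bucketed HDFS directories are created by the sink itself, but the parent can be pre-created if you like. A minimal preparation sketch, assuming the paths from the config above:

# local directory watched by the spooldir source
mkdir -p /home/hadoop/spooldir
# data directory used by the file channel
mkdir -p /hadoop/flume/datadir
# optional: pre-create the HDFS parent directory for the sink
hadoop fs -mkdir -p /flume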


3. Start Flume

From the Flume home directory, run:

bin/flume-ng agent --conf conf --conf-file conf/agent1.conf --name agent1 -Dflume.root.logger=INFO,console
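The command above runs the agent in the foreground with console logging, which is convenient while verifying the setup. For a longer-running test you could instead detach it with standard shell tools; a hedged variant (the log file name flume-agent1.log is just an example):

nohup bin/flume-ng agent --conf conf --conf-file conf/agent1.conf --name agent1 > flume-agent1.log 2>&1 &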

 

A successful start produces output like the following:

....................................
2016-08-09 16:28:33,888 (conf-file-poller-0) [INFO - org.apache.flume.node.Application.startAllComponents(Application.java:173)] Starting Sink hdfsSink
2016-08-09 16:28:33,891 (conf-file-poller-0) [INFO - org.apache.flume.node.Application.startAllComponents(Application.java:184)] Starting Source spooldirSource
2016-08-09 16:28:33,891 (lifecycleSupervisor-1-3) [INFO - org.apache.flume.source.SpoolDirectorySource.start(SpoolDirectorySource.java:78)] SpoolDirectorySource source starting with directory: /home/hadoop/spooldir
2016-08-09 16:28:33,900 (lifecycleSupervisor-1-1) [INFO - org.apache.flume.instrumentation.MonitoredCounterGroup.register(MonitoredCounterGroup.java:120)] Monitored counter group for type: SINK, name: hdfsSink: Successfully registered new MBean.
2016-08-09 16:28:33,900 (lifecycleSupervisor-1-1) [INFO - org.apache.flume.instrumentation.MonitoredCounterGroup.start(MonitoredCounterGroup.java:96)] Component type: SINK, name: hdfsSink started
2016-08-09 16:28:33,925 (lifecycleSupervisor-1-3) [INFO - org.apache.flume.instrumentation.MonitoredCounterGroup.register(MonitoredCounterGroup.java:120)] Monitored counter group for type: SOURCE, name: spooldirSource: Successfully registered new MBean.
2016-08-09 16:28:33,925 (lifecycleSupervisor-1-3) [INFO - org.apache.flume.instrumentation.MonitoredCounterGroup.start(MonitoredCounterGroup.java:96)] Component type: SOURCE, name: spooldirSource started


 

4. Write log files into the Flume spooldir
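To trigger ingestion, copy a log file into the directory watched by the spooldir source. A minimal sketch, assuming the sample data file named in the agent log below and that it sits in the current directory (any local file would do):

cp HTTP_20130313143750.dat /home/hadoop/spooldir/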

Once the file has been picked up, the source renames it with a .COMPLETED suffix and the HDFS sink writes the events out, producing output like the following:

2016-08-09 16:36:51,204 (pool-4-thread-1) [INFO - org.apache.flume.client.avro.ReliableSpoolingFileEventReader.readEvents(ReliableSpoolingFileEventReader.java:258)] Last read took us just up to a file boundary. Rolling to the next file, if there is one.
2016-08-09 16:36:51,204 (pool-4-thread-1) [INFO - org.apache.flume.client.avro.ReliableSpoolingFileEventReader.rollCurrentFile(ReliableSpoolingFileEventReader.java:348)] Preparing to move file /home/hadoop/spooldir/HTTP_20130313143750.dat to /home/hadoop/spooldir/HTTP_20130313143750.dat.COMPLETED
2016-08-09 16:36:53,965 (SinkRunner-PollingRunner-DefaultSinkProcessor) [INFO - org.apache.flume.sink.hdfs.HDFSDataStream.configure(HDFSDataStream.java:58)] Serializer = TEXT, UseRawLocalFileSystem = false
2016-08-09 16:36:54,206 (SinkRunner-PollingRunner-DefaultSinkProcessor) [INFO - org.apache.flume.sink.hdfs.BucketWriter.open(BucketWriter.java:234)] Creating hdfs://hadoop01:9000/flume/16-08-09/1630/00/FlumeData.1470731813966.tmp
2016-08-09 16:36:56,772 (SinkRunner-PollingRunner-DefaultSinkProcessor) [INFO - org.apache.flume.sink.hdfs.BucketWriter.close(BucketWriter.java:363)] Closing hdfs://hadoop01:9000/flume/16-08-09/1630/00/FlumeData.1470731813966.tmp
2016-08-09 16:36:56,903 (hdfs-hdfsSink-call-runner-3) [INFO - org.apache.flume.sink.hdfs.BucketWriter$8.call(BucketWriter.java:629)] Renaming hdfs://hadoop01:9000/flume/16-08-09/1630/00/FlumeData.1470731813966.tmp to hdfs://hadoop01:9000/flume/16-08-09/1630/00/FlumeData.1470731813966
2016-08-09 16:36:57,149 (SinkRunner-PollingRunner-DefaultSinkProcessor) [INFO - org.apache.flume.sink.hdfs.BucketWriter.open(BucketWriter.java:234)] Creating hdfs://hadoop01:9000/flume/16-08-09/1630/00/FlumeData.1470731813967.tmp
2016-08-09 16:36:57,637 (SinkRunner-PollingRunner-DefaultSinkProcessor) [INFO - org.apache.flume.sink.hdfs.BucketWriter.close(BucketWriter.java:363)] Closing hdfs://hadoop01:9000/flume/16-08-09/1630/00/FlumeData.1470731813967.tmp
2016-08-09 16:36:57,805 (hdfs-hdfsSink-call-runner-7) [INFO - org.apache.flume.sink.hdfs.BucketWriter$8.call(BucketWriter.java:629)] Renaming hdfs://hadoop01:9000/flume/16-08-09/1630/00/FlumeData.1470731813967.tmp to hdfs://hadoop01:9000/flume/16-08-09/1630/00/FlumeData.1470731813967
2016-08-09 16:36:57,955 (SinkRunner-PollingRunner-DefaultSinkProcessor) [INFO - org.apache.flume.sink.hdfs.BucketWriter.open(BucketWriter.java:234)] Creating hdfs://hadoop01:9000/flume/16-08-09/1630/00/FlumeData.1470731813968.tmp
2016-08-09 16:37:03,525 (Log-BackgroundWorker-fileChannel) [INFO - org.apache.flume.channel.file.EventQueueBackingStoreFile.beginCheckpoint(EventQueueBackingStoreFile.java:230)] Start checkpoint for /home/hadoop/.flume/file-channel/checkpoint/checkpoint, elements to sync = 22
2016-08-09 16:37:03,566 (Log-BackgroundWorker-fileChannel) [INFO - org.apache.flume.channel.file.EventQueueBackingStoreFile.checkpoint(EventQueueBackingStoreFile.java:255)] Updating checkpoint metadata: logWriteOrderID: 1470731313610, queueSize: 0, queueHead: 20
2016-08-09 16:37:03,572 (Log-BackgroundWorker-fileChannel) [INFO - org.apache.flume.channel.file.Log.writeCheckpoint(Log.java:1034)] Updated checkpoint for file: /hadoop/flume/datadir/log-5 position: 4155 logWriteOrderID: 1470731313610
2016-08-09 16:37:03,572 (Log-BackgroundWorker-fileChannel) [INFO - org.apache.flume.channel.file.LogFile$RandomReader.close(LogFile.java:504)] Closing RandomReader /hadoop/flume/datadir/log-3
2016-08-09 16:37:28,072 (hdfs-hdfsSink-roll-timer-0) [INFO - org.apache.flume.sink.hdfs.BucketWriter.close(BucketWriter.java:363)] Closing hdfs://hadoop01:9000/flume/16-08-09/1630/00/FlumeData.1470731813968.tmp
2016-08-09 16:37:28,182 (hdfs-hdfsSink-call-runner-3) [INFO - org.apache.flume.sink.hdfs.BucketWriter$8.call(BucketWriter.java:629)] Renaming hdfs://hadoop01:9000/flume/16-08-09/1630/00/FlumeData.1470731813968.tmp to hdfs://hadoop01:9000/flume/16-08-09/1630/00/FlumeData.1470731813968
2016-08-09 16:37:28,364 (hdfs-hdfsSink-roll-timer-0) [INFO - org.apache.flume.sink.hdfs.HDFSEventSink$1.run(HDFSEventSink.java:394)] Writer callback called.


 

 

Check the corresponding directory on HDFS:

[hadoop@hadoop01 spooldir]$ hadoop fs -ls /flume/16-08-09/1630/00
Found 3 items
-rw-r--r--   3 hadoop supergroup        969 2016-08-09 16:36 /flume/16-08-09/1630/00/FlumeData.1470731813966
-rw-r--r--   3 hadoop supergroup       1070 2016-08-09 16:36 /flume/16-08-09/1630/00/FlumeData.1470731813967
-rw-r--r--   3 hadoop supergroup        191 2016-08-09 16:37 /flume/16-08-09/1630/00/FlumeData.1470731813968


 

View the file contents on HDFS:

[hadoop@hadoop01 spooldir]$ hadoop fs -cat /flume/16-08-09/1630/00/*
1363157985066 13726230503 00-FD-07-A4-72-B8:CMCC 120.196.100.82 i02.c.aliimg.com 24 27 2481 2468 200
1363157995052 13826544101 5C-0E-8B-C7-F1-E0:CMCC 120.197.40.4 4 0 264 0 200
1363157991076 13926435656 20-10-7A-28-CC-0A:CMCC 120.196.100.99 2 4 132 1512 200
1363154400022 13926251106 5C-0E-8B-8B-B1-50:CMCC 120.197.40.4 4 0 240 0 200
1363157993044 18211575961 94-71-AC-CD-E6-18:CMCC-EASY 120.196.100.99 iface.qiyi.com 视频网站 15 12 15272106 200
1363157995074 84138413 5C-0E-8B-8C-E8-20:7DaysInn 120.197.40.4 122.72.52.12 20 16 4116 1432 200
1363157993055 13560439658 C4-17-FE-BA-DE-D9:CMCC 120.196.100.99 18 15 1116 954 200
1363157995033 15920133257 5C-0E-8B-C7-BA-20:CMCC 120.197.40.4 sug.so.360.cn 信息安全 20 20 3156 2936 200
1363157983019 13719199419 68-A1-B7-03-07-B1:CMCC-EASY 120.196.100.82 4 0 240 0 200
1363157984041 13660577991 5C-0E-8B-92-5C-20:CMCC-EASY 120.197.40.4 s19.cnzz.com 站点统计 24 9 6960690 200
1363157973098 15013685858 5C-0E-8B-C7-F7-90:CMCC 120.197.40.4 rank.ie.sogou.com 搜索引擎 28 27 36593538 200
1363157986029 15989002119 E8-99-C4-4E-93-E0:CMCC-EASY 120.196.100.99 www.umeng.com 站点统计 3 3 1938180 200
1363157992093 13560439658 C4-17-FE-BA-DE-D9:CMCC 120.196.100.99 15 9 918 4938 200
1363157986041 13480253104 5C-0E-8B-C7-FC-80:CMCC-EASY 120.197.40.4 3 3 180 180 200
1363157984040 13602846565 5C-0E-8B-8B-B6-00:CMCC 120.197.40.4 2052.flash2-http.qq.com 综合门户 15 12 19382910 200
1363157995093 13922314466 00-FD-07-A2-EC-BA:CMCC 120.196.100.82 img.qfc.cn 12 12 3008 3720 200
1363157982040 13502468823 5C-0A-5B-6A-0B-D4:CMCC-EASY 120.196.100.99 y0.ifengimg.com 综合门户 57 102 7335110349 200
1363157986072 18320173382 84-25-DB-4F-10-1A:CMCC-EASY 120.196.100.99 input.shouji.sogou.com 搜索引擎 21 18 9531 2412 200
1363157990043 13925057413 00-1F-64-E1-E6-9A:CMCC 120.196.100.55 t3.baidu.com 搜索引擎 69 63 11058 4824 200
1363157988072 13760778710 00-FD-07-A4-7B-08:CMCC 120.196.100.82 2 2 120 120 200
1363157985066 13726238888 00-FD-07-A4-72-B8:CMCC 120.196.100.82 i02.c.aliimg.com 24 27 2481 2468 200
1363157993055 13560436666 C4-17-FE-BA-DE-D9:CMCC 120.196.100.99 18 15 1116 954 200
[hadoop@hadoop01 spooldir]$


The upload succeeded.




