This article is part 1 of the "Hadoop: from getting started to giving up" series and walks through using Flume to collect data and store it in HDFS.
1. Extract Flume into the /hadoop/ directory
tar -zxvf apache-flume-1.6.0-bin.tar.gz -C /hadoop/
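The shell prompts later in this article show the Flume home as /hadoop/flume, so the extracted apache-flume-1.6.0-bin directory was presumably renamed. A minimal sketch of that assumed step (the FLUME_HOME export is also an assumption, added only for convenience):

# rename the extracted directory to the /hadoop/flume path used in the rest of this article (assumed)
mv /hadoop/apache-flume-1.6.0-bin /hadoop/flume
# optional: put flume-ng on the PATH (assumption; you can also run bin/flume-ng from the Flume home)
export FLUME_HOME=/hadoop/flume
export PATH=$PATH:$FLUME_HOME/bin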
2. Write the Flume agent configuration
[hadoop@hadoop01 flume]$ cat conf/agent1.conf
# Name the components on this agent
agent1.sources = spooldirSource
agent1.channels = fileChannel
agent1.sinks = hdfsSink

# Describe/configure the source
agent1.sources.spooldirSource.type = spooldir
agent1.sources.spooldirSource.spoolDir = /home/hadoop/spooldir

# Describe the sink
agent1.sinks.hdfsSink.type = hdfs
agent1.sinks.hdfsSink.hdfs.path = hdfs://hadoop01:9000/flume/%y-%m-%d/%H%M/%S
agent1.sinks.hdfsSink.hdfs.round = true
agent1.sinks.hdfsSink.hdfs.roundValue = 10
agent1.sinks.hdfsSink.hdfs.roundUnit = minute
agent1.sinks.hdfsSink.hdfs.useLocalTimeStamp = true
agent1.sinks.hdfsSink.hdfs.fileType = DataStream

# Describe the channel
agent1.channels.fileChannel.type = file
agent1.channels.fileChannel.dataDirs = /hadoop/flume/datadir

# Bind the source and sink to the channel
agent1.sources.spooldirSource.channels = fileChannel
agent1.sinks.hdfsSink.channel = fileChannel
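A few notes on this configuration: the sink path contains time escapes (%y-%m-%d/%H%M/%S), which requires a timestamp on each event, hence hdfs.useLocalTimeStamp = true; and round/roundValue/roundUnit bucket events into 10-minute directories, which is why files written at 16:36 land under .../1630/00 in the output further below. Also note that the spooling-directory source expects spoolDir to already exist when the agent starts. A minimal preparation sketch (these mkdir commands are my assumption, not steps shown in the original):

# create the local directory the spooldir source watches (must exist before startup)
mkdir -p /home/hadoop/spooldir
# pre-create the file channel's data directory to avoid permission surprises
mkdir -p /hadoop/flume/datadir
# ensure the flume root exists on HDFS (the sink creates the dated subdirectories itself)
hadoop fs -mkdir -p /flume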
3. Start Flume
Change into the Flume home directory and run:
bin/flume-ng agent --conf conf --conf-file conf/agent1.conf --name agent1 -Dflume.root.logger=INFO,console
A successful start produces output like the following:
......
2016-08-09 16:28:33,888 (conf-file-poller-0) [INFO - org.apache.flume.node.Application.startAllComponents(Application.java:173)] Starting Sink hdfsSink
2016-08-09 16:28:33,891 (conf-file-poller-0) [INFO - org.apache.flume.node.Application.startAllComponents(Application.java:184)] Starting Source spooldirSource
2016-08-09 16:28:33,891 (lifecycleSupervisor-1-3) [INFO - org.apache.flume.source.SpoolDirectorySource.start(SpoolDirectorySource.java:78)] SpoolDirectorySource source starting with directory: /home/hadoop/spooldir
2016-08-09 16:28:33,900 (lifecycleSupervisor-1-1) [INFO - org.apache.flume.instrumentation.MonitoredCounterGroup.register(MonitoredCounterGroup.java:120)] Monitored counter group for type: SINK, name: hdfsSink: Successfully registered new MBean.
2016-08-09 16:28:33,900 (lifecycleSupervisor-1-1) [INFO - org.apache.flume.instrumentation.MonitoredCounterGroup.start(MonitoredCounterGroup.java:96)] Component type: SINK, name: hdfsSink started
2016-08-09 16:28:33,925 (lifecycleSupervisor-1-3) [INFO - org.apache.flume.instrumentation.MonitoredCounterGroup.register(MonitoredCounterGroup.java:120)] Monitored counter group for type: SOURCE, name: spooldirSource: Successfully registered new MBean.
2016-08-09 16:28:33,925 (lifecycleSupervisor-1-3) [INFO - org.apache.flume.instrumentation.MonitoredCounterGroup.start(MonitoredCounterGroup.java:96)] Component type: SOURCE, name: spooldirSource started
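The command above runs the agent in the foreground with logs printed to the console, which is convenient for a first test. To keep the agent running after you log out, one common approach is nohup (a sketch, not part of the original article; the log file name is arbitrary):

nohup bin/flume-ng agent --conf conf --conf-file conf/agent1.conf --name agent1 > flume-agent1.log 2>&1 &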
4. Write log files into the Flume spool directory
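Any file dropped into /home/hadoop/spooldir is picked up by the source, shipped to HDFS, and renamed with a .COMPLETED suffix once fully ingested. For example, to feed in the HTTP_20130313143750.dat file that appears in the logs below (the source path here is an assumption):

# copy a data file into the watched spool directory
cp ~/HTTP_20130313143750.dat /home/hadoop/spooldir/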
Once Flume finishes consuming the file, you will see output like the following:
2016-08-09 16:36:51,204 (pool-4-thread-1) [INFO - org.apache.flume.client.avro.ReliableSpoolingFileEventReader.readEvents(ReliableSpoolingFileEventReader.java:258)] Last read took us just up to a file boundary. Rolling to the next file, if there is one.
2016-08-09 16:36:51,204 (pool-4-thread-1) [INFO - org.apache.flume.client.avro.ReliableSpoolingFileEventReader.rollCurrentFile(ReliableSpoolingFileEventReader.java:348)] Preparing to move file /home/hadoop/spooldir/HTTP_20130313143750.dat to /home/hadoop/spooldir/HTTP_20130313143750.dat.COMPLETED
2016-08-09 16:36:53,965 (SinkRunner-PollingRunner-DefaultSinkProcessor) [INFO - org.apache.flume.sink.hdfs.HDFSDataStream.configure(HDFSDataStream.java:58)] Serializer = TEXT, UseRawLocalFileSystem = false
2016-08-09 16:36:54,206 (SinkRunner-PollingRunner-DefaultSinkProcessor) [INFO - org.apache.flume.sink.hdfs.BucketWriter.open(BucketWriter.java:234)] Creating hdfs://hadoop01:9000/flume/16-08-09/1630/00/FlumeData.1470731813966.tmp
2016-08-09 16:36:56,772 (SinkRunner-PollingRunner-DefaultSinkProcessor) [INFO - org.apache.flume.sink.hdfs.BucketWriter.close(BucketWriter.java:363)] Closing hdfs://hadoop01:9000/flume/16-08-09/1630/00/FlumeData.1470731813966.tmp
2016-08-09 16:36:56,903 (hdfs-hdfsSink-call-runner-3) [INFO - org.apache.flume.sink.hdfs.BucketWriter$8.call(BucketWriter.java:629)] Renaming hdfs://hadoop01:9000/flume/16-08-09/1630/00/FlumeData.1470731813966.tmp to hdfs://hadoop01:9000/flume/16-08-09/1630/00/FlumeData.1470731813966
2016-08-09 16:36:57,149 (SinkRunner-PollingRunner-DefaultSinkProcessor) [INFO - org.apache.flume.sink.hdfs.BucketWriter.open(BucketWriter.java:234)] Creating hdfs://hadoop01:9000/flume/16-08-09/1630/00/FlumeData.1470731813967.tmp
2016-08-09 16:36:57,637 (SinkRunner-PollingRunner-DefaultSinkProcessor) [INFO - org.apache.flume.sink.hdfs.BucketWriter.close(BucketWriter.java:363)] Closing hdfs://hadoop01:9000/flume/16-08-09/1630/00/FlumeData.1470731813967.tmp
2016-08-09 16:36:57,805 (hdfs-hdfsSink-call-runner-7) [INFO - org.apache.flume.sink.hdfs.BucketWriter$8.call(BucketWriter.java:629)] Renaming hdfs://hadoop01:9000/flume/16-08-09/1630/00/FlumeData.1470731813967.tmp to hdfs://hadoop01:9000/flume/16-08-09/1630/00/FlumeData.1470731813967
2016-08-09 16:36:57,955 (SinkRunner-PollingRunner-DefaultSinkProcessor) [INFO - org.apache.flume.sink.hdfs.BucketWriter.open(BucketWriter.java:234)] Creating hdfs://hadoop01:9000/flume/16-08-09/1630/00/FlumeData.1470731813968.tmp
2016-08-09 16:37:03,525 (Log-BackgroundWorker-fileChannel) [INFO - org.apache.flume.channel.file.EventQueueBackingStoreFile.beginCheckpoint(EventQueueBackingStoreFile.java:230)] Start checkpoint for /home/hadoop/.flume/file-channel/checkpoint/checkpoint, elements to sync = 22
2016-08-09 16:37:03,566 (Log-BackgroundWorker-fileChannel) [INFO - org.apache.flume.channel.file.EventQueueBackingStoreFile.checkpoint(EventQueueBackingStoreFile.java:255)] Updating checkpoint metadata: logWriteOrderID: 1470731313610, queueSize: 0, queueHead: 20
2016-08-09 16:37:03,572 (Log-BackgroundWorker-fileChannel) [INFO - org.apache.flume.channel.file.Log.writeCheckpoint(Log.java:1034)] Updated checkpoint for file: /hadoop/flume/datadir/log-5 position: 4155 logWriteOrderID: 1470731313610
2016-08-09 16:37:03,572 (Log-BackgroundWorker-fileChannel) [INFO - org.apache.flume.channel.file.LogFile$RandomReader.close(LogFile.java:504)] Closing RandomReader /hadoop/flume/datadir/log-3
2016-08-09 16:37:28,072 (hdfs-hdfsSink-roll-timer-0) [INFO - org.apache.flume.sink.hdfs.BucketWriter.close(BucketWriter.java:363)] Closing hdfs://hadoop01:9000/flume/16-08-09/1630/00/FlumeData.1470731813968.tmp
2016-08-09 16:37:28,182 (hdfs-hdfsSink-call-runner-3) [INFO - org.apache.flume.sink.hdfs.BucketWriter$8.call(BucketWriter.java:629)] Renaming hdfs://hadoop01:9000/flume/16-08-09/1630/00/FlumeData.1470731813968.tmp to hdfs://hadoop01:9000/flume/16-08-09/1630/00/FlumeData.1470731813968
2016-08-09 16:37:28,364 (hdfs-hdfsSink-roll-timer-0) [INFO - org.apache.flume.sink.hdfs.HDFSEventSink$1.run(HDFSEventSink.java:394)] Writer callback called.
Check the corresponding directory on HDFS:
[hadoop@hadoop01 spooldir]$ hadoop fs -ls /flume/16-08-09/1630/00
Found 3 items
-rw-r--r--   3 hadoop supergroup        969 2016-08-09 16:36 /flume/16-08-09/1630/00/FlumeData.1470731813966
-rw-r--r--   3 hadoop supergroup       1070 2016-08-09 16:36 /flume/16-08-09/1630/00/FlumeData.1470731813967
-rw-r--r--   3 hadoop supergroup        191 2016-08-09 16:37 /flume/16-08-09/1630/00/FlumeData.1470731813968
View the contents of the files on HDFS:
[hadoop@hadoop01 spooldir]$ hadoop fs -cat /flume/16-08-09/1630/00/*
1363157985066 13726230503 00-FD-07-A4-72-B8:CMCC 120.196.100.82 i02.c.aliimg.com 24 27 2481 2468 200
1363157995052 13826544101 5C-0E-8B-C7-F1-E0:CMCC 120.197.40.4 4 0 264 0 200
1363157991076 13926435656 20-10-7A-28-CC-0A:CMCC 120.196.100.99 2 4 132 1512 200
1363154400022 13926251106 5C-0E-8B-8B-B1-50:CMCC 120.197.40.4 4 0 240 0 200
1363157993044 18211575961 94-71-AC-CD-E6-18:CMCC-EASY 120.196.100.99 iface.qiyi.com 视频网站 15 12 1527 2106 200
1363157995074 84138413 5C-0E-8B-8C-E8-20:7DaysInn 120.197.40.4 122.72.52.12 20 16 4116 1432 200
1363157993055 13560439658 C4-17-FE-BA-DE-D9:CMCC 120.196.100.99 18 15 1116 954 200
1363157995033 15920133257 5C-0E-8B-C7-BA-20:CMCC 120.197.40.4 sug.so.360.cn 信息安全 20 20 3156 2936 200
1363157983019 13719199419 68-A1-B7-03-07-B1:CMCC-EASY 120.196.100.82 4 0 240 0 200
1363157984041 13660577991 5C-0E-8B-92-5C-20:CMCC-EASY 120.197.40.4 s19.cnzz.com 站点统计 24 9 6960 690 200
1363157973098 15013685858 5C-0E-8B-C7-F7-90:CMCC 120.197.40.4 rank.ie.sogou.com 搜索引擎 28 27 3659 3538 200
1363157986029 15989002119 E8-99-C4-4E-93-E0:CMCC-EASY 120.196.100.99 www.umeng.com 站点统计 3 3 1938 180 200
1363157992093 13560439658 C4-17-FE-BA-DE-D9:CMCC 120.196.100.99 15 9 918 4938 200
1363157986041 13480253104 5C-0E-8B-C7-FC-80:CMCC-EASY 120.197.40.4 3 3 180 180 200
1363157984040 13602846565 5C-0E-8B-8B-B6-00:CMCC 120.197.40.4 2052.flash2-http.qq.com 综合门户 15 12 1938 2910 200
1363157995093 13922314466 00-FD-07-A2-EC-BA:CMCC 120.196.100.82 img.qfc.cn 12 12 3008 3720 200
1363157982040 13502468823 5C-0A-5B-6A-0B-D4:CMCC-EASY 120.196.100.99 y0.ifengimg.com 综合门户 57 102 7335 110349 200
1363157986072 18320173382 84-25-DB-4F-10-1A:CMCC-EASY 120.196.100.99 input.shouji.sogou.com 搜索引擎 21 18 9531 2412 200
1363157990043 13925057413 00-1F-64-E1-E6-9A:CMCC 120.196.100.55 t3.baidu.com 搜索引擎 69 63 11058 4824 200
1363157988072 13760778710 00-FD-07-A4-7B-08:CMCC 120.196.100.82 2 2 120 120 200
1363157985066 13726238888 00-FD-07-A4-72-B8:CMCC 120.196.100.82 i02.c.aliimg.com 24 27 2481 2468 200
1363157993055 13560436666 C4-17-FE-BA-DE-D9:CMCC 120.196.100.99 18 15 1116 954 200
[hadoop@hadoop01 spooldir]$
The upload succeeded.
That wraps up this walkthrough of using Flume to ingest data into HDFS; hopefully it serves as a useful reference.