Hadoop Word Count (Part 2): Running in Local Mode

2024-06-09 10:38

This article walks through running the Hadoop word-count job in local mode on Windows, which is a convenient way to develop and debug MapReduce code without a cluster.

To run Hadoop in local mode on Windows, you first need a working local Hadoop environment, which means downloading a prebuilt Hadoop binary package.

Download link:

链接:http://pan.baidu.com/s/1skE4fQt 密码:or48

After downloading, configure the Windows environment variables:

HADOOP_HOME=C:\Program Files (x86)\hadoop-2.6.0

PATH=%PATH%;%HADOOP_HOME%\bin

(Note: Windows separates PATH entries with a semicolon, not a colon.)
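
Before moving on, it is worth confirming that the variables are actually visible to the JVM. (A prebuilt Windows package is needed in the first place because the stock Apache release does not ship the winutils.exe/hadoop.dll binaries that Hadoop looks for under %HADOOP_HOME%\bin.) A minimal, hypothetical sanity check:

// Hypothetical sanity check: print the Hadoop-related environment
// variables the JVM actually sees before running a local-mode job.
public class EnvCheck {
    public static void main(String[] args) {
        System.out.println("HADOOP_HOME = " + System.getenv("HADOOP_HOME"));
        String path = System.getenv("PATH");
        System.out.println("hadoop bin on PATH: "
                + (path != null && path.toLowerCase().contains("hadoop-2.6.0\\bin")));
    }
}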

map:

package cn.hadoop.mr;

import java.io.IOException;

import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.util.StringUtils;

public class WCMapper extends Mapper<LongWritable, Text, Text, LongWritable> {

    @Override
    protected void map(LongWritable key, Text value, Context context)
            throws IOException, InterruptedException {
        // Each input record is one line of text; split it into words on spaces.
        String line = value.toString();
        String[] words = StringUtils.split(line, ' ');
        // Emit <word, 1> for every word on the line.
        for (String word : words) {
            context.write(new Text(word), new LongWritable(1));
        }
    }
}
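
A small aside (not in the original post): the mapper above allocates a fresh Text and LongWritable for every word. Hadoop serializes the key/value during context.write(), so it is safe, and common, to reuse the writable instances instead. A sketch of that variant:

package cn.hadoop.mr;

import java.io.IOException;

import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.util.StringUtils;

// Optional variant of WCMapper: reuses writable instances across calls.
public class WCMapperReuse extends Mapper<LongWritable, Text, Text, LongWritable> {

    private final Text outKey = new Text();
    private final LongWritable one = new LongWritable(1);

    @Override
    protected void map(LongWritable key, Text value, Context context)
            throws IOException, InterruptedException {
        for (String word : StringUtils.split(value.toString(), ' ')) {
            outKey.set(word);           // overwrite instead of allocating
            context.write(outKey, one); // the framework copies the bytes here
        }
    }
}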
reduce:

package cn.hadoop.mr;

import java.io.IOException;

import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Reducer;

public class WCReducer extends Reducer<Text, LongWritable, Text, LongWritable> {

    @Override
    protected void reduce(Text key, Iterable<LongWritable> values, Context context)
            throws IOException, InterruptedException {
        // All the 1s emitted for the same word arrive together; sum them.
        long count = 0;
        for (LongWritable value : values) {
            count += value.get();
        }
        context.write(key, new LongWritable(count));
    }
}
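
Because this reduce function simply sums longs, it is associative and commutative, so the same class could also serve as a combiner (an optional tweak, not part of the original job): adding wcjob.setCombinerClass(WCReducer.class); in the runner below would pre-aggregate counts on the map side before the shuffle. The counters in the sample run show Combine input records=0 because no combiner is set.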

run:

package cn.hadoop.mr;

import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class WCRunner {

    public static void main(String[] args)
            throws IOException, ClassNotFoundException, InterruptedException {
        Configuration conf = new Configuration();
        Job wcjob = Job.getInstance(conf);

        wcjob.setJarByClass(WCRunner.class);
        wcjob.setMapperClass(WCMapper.class);
        wcjob.setReducerClass(WCReducer.class);

        // Output types of the reducer.
        wcjob.setOutputKeyClass(Text.class);
        wcjob.setOutputValueClass(LongWritable.class);
        // Map output types, set explicitly (must match WCMapper's key/value types).
        wcjob.setMapOutputKeyClass(Text.class);
        wcjob.setMapOutputValueClass(LongWritable.class);

        // Plain local paths work here: in local mode the default
        // filesystem is file://, not HDFS.
        FileInputFormat.setInputPaths(wcjob, "E:/wc/inputdata/");
        FileOutputFormat.setOutputPath(wcjob, new Path("E:/wc/output/"));

        wcjob.waitForCompletion(true);
    }
}
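
The input file itself is not shown in the original post, but from the log below (one split of 78 bytes, 6 input records, 12 map output records) and the final counts, an input along these lines would reproduce the run. The helper class is purely hypothetical:

package cn.hadoop.mr;

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.Arrays;

// Hypothetical helper: writes a test input whose word counts match the
// sample output at the end of this article (the real in.dat is not shown).
public class MakeTestInput {
    public static void main(String[] args) throws IOException {
        Path dir = Paths.get("E:/wc/inputdata");
        Files.createDirectories(dir);
        Files.write(dir.resolve("in.dat"), Arrays.asList(
                "haha hehe",
                "haha heiheihei",
                "lololo haha lalala",
                "lololo hehe",
                "haha heiheihei",
                "lololo"));
    }
}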

If jars are missing from the build path, add all of the jar files under C:\Program Files (x86)\hadoop-2.6.0\share\hadoop (and its subdirectories) to the project.
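
Alternatively, if the project is built with Maven rather than by hand-adding jars, a single dependency covers the same classpath (a sketch; the version matches the 2.6.0 package above):

<dependency>
    <groupId>org.apache.hadoop</groupId>
    <artifactId>hadoop-client</artifactId>
    <version>2.6.0</version>
</dependency>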

Then run the main method in Eclipse as a plain Java Application. One caveat: the output directory (E:/wc/output/) must not already exist, or FileOutputFormat will refuse to start the job.

The console output looks like this:

2016-07-25 15:47:06,565 INFO  [main] Configuration.deprecation (Configuration.java:warnOnceIfDeprecated(1049)) - session.id is deprecated. Instead, use dfs.metrics.session-id
2016-07-25 15:47:06,569 INFO  [main] jvm.JvmMetrics (JvmMetrics.java:init(76)) - Initializing JVM Metrics with processName=JobTracker, sessionId=
2016-07-25 15:47:06,751 WARN  [main] mapreduce.JobSubmitter (JobSubmitter.java:copyAndConfigureFiles(153)) - Hadoop command-line option parsing not performed. Implement the Tool interface and execute your application with ToolRunner to remedy this.
2016-07-25 15:47:06,752 WARN  [main] mapreduce.JobSubmitter (JobSubmitter.java:copyAndConfigureFiles(261)) - No job jar file set.  User classes may not be found. See Job or Job#setJar(String).
2016-07-25 15:47:06,796 INFO  [main] input.FileInputFormat (FileInputFormat.java:listStatus(281)) - Total input paths to process : 1
2016-07-25 15:47:06,836 INFO  [main] mapreduce.JobSubmitter (JobSubmitter.java:submitJobInternal(494)) - number of splits:1
2016-07-25 15:47:06,910 INFO  [main] mapreduce.JobSubmitter (JobSubmitter.java:printTokens(583)) - Submitting tokens for job: job_local1228851727_0001
2016-07-25 15:47:07,087 INFO  [main] mapreduce.Job (Job.java:submit(1300)) - The url to track the job: http://localhost:8080/
2016-07-25 15:47:07,088 INFO  [main] mapreduce.Job (Job.java:monitorAndPrintJob(1345)) - Running job: job_local1228851727_0001
2016-07-25 15:47:07,089 INFO  [Thread-4] mapred.LocalJobRunner (LocalJobRunner.java:createOutputCommitter(471)) - OutputCommitter set in config null
2016-07-25 15:47:07,094 INFO  [Thread-4] mapred.LocalJobRunner (LocalJobRunner.java:createOutputCommitter(489)) - OutputCommitter is org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter
2016-07-25 15:47:07,131 INFO  [Thread-4] mapred.LocalJobRunner (LocalJobRunner.java:runTasks(448)) - Waiting for map tasks
2016-07-25 15:47:07,132 INFO  [LocalJobRunner Map Task Executor #0] mapred.LocalJobRunner (LocalJobRunner.java:run(224)) - Starting task: attempt_local1228851727_0001_m_000000_0
2016-07-25 15:47:07,156 INFO  [LocalJobRunner Map Task Executor #0] util.ProcfsBasedProcessTree (ProcfsBasedProcessTree.java:isAvailable(181)) - ProcfsBasedProcessTree currently is supported only on Linux.
2016-07-25 15:47:07,182 INFO  [LocalJobRunner Map Task Executor #0] mapred.Task (Task.java:initialize(587)) -  Using ResourceCalculatorProcessTree : org.apache.hadoop.yarn.util.WindowsBasedProcessTree@6db06d7d
2016-07-25 15:47:07,185 INFO  [LocalJobRunner Map Task Executor #0] mapred.MapTask (MapTask.java:runNewMapper(753)) - Processing split: file:/E:/wc/inputdata/in.dat:0+78
2016-07-25 15:47:07,225 INFO  [LocalJobRunner Map Task Executor #0] mapred.MapTask (MapTask.java:setEquator(1202)) - (EQUATOR) 0 kvi 26214396(104857584)
2016-07-25 15:47:07,225 INFO  [LocalJobRunner Map Task Executor #0] mapred.MapTask (MapTask.java:init(995)) - mapreduce.task.io.sort.mb: 100
2016-07-25 15:47:07,225 INFO  [LocalJobRunner Map Task Executor #0] mapred.MapTask (MapTask.java:init(996)) - soft limit at 83886080
2016-07-25 15:47:07,225 INFO  [LocalJobRunner Map Task Executor #0] mapred.MapTask (MapTask.java:init(997)) - bufstart = 0; bufvoid = 104857600
2016-07-25 15:47:07,225 INFO  [LocalJobRunner Map Task Executor #0] mapred.MapTask (MapTask.java:init(998)) - kvstart = 26214396; length = 6553600
2016-07-25 15:47:07,228 INFO  [LocalJobRunner Map Task Executor #0] mapred.MapTask (MapTask.java:createSortingCollector(402)) - Map output collector class = org.apache.hadoop.mapred.MapTask$MapOutputBuffer
2016-07-25 15:47:07,234 INFO  [LocalJobRunner Map Task Executor #0] mapred.LocalJobRunner (LocalJobRunner.java:statusUpdate(591)) - 
2016-07-25 15:47:07,234 INFO  [LocalJobRunner Map Task Executor #0] mapred.MapTask (MapTask.java:flush(1457)) - Starting flush of map output
2016-07-25 15:47:07,234 INFO  [LocalJobRunner Map Task Executor #0] mapred.MapTask (MapTask.java:flush(1475)) - Spilling map output
2016-07-25 15:47:07,234 INFO  [LocalJobRunner Map Task Executor #0] mapred.MapTask (MapTask.java:flush(1476)) - bufstart = 0; bufend = 174; bufvoid = 104857600
2016-07-25 15:47:07,234 INFO  [LocalJobRunner Map Task Executor #0] mapred.MapTask (MapTask.java:flush(1478)) - kvstart = 26214396(104857584); kvend = 26214352(104857408); length = 45/6553600
2016-07-25 15:47:07,243 INFO  [LocalJobRunner Map Task Executor #0] mapred.MapTask (MapTask.java:sortAndSpill(1660)) - Finished spill 0
2016-07-25 15:47:07,248 INFO  [LocalJobRunner Map Task Executor #0] mapred.Task (Task.java:done(1001)) - Task:attempt_local1228851727_0001_m_000000_0 is done. And is in the process of committing
2016-07-25 15:47:07,256 INFO  [LocalJobRunner Map Task Executor #0] mapred.LocalJobRunner (LocalJobRunner.java:statusUpdate(591)) - map
2016-07-25 15:47:07,256 INFO  [LocalJobRunner Map Task Executor #0] mapred.Task (Task.java:sendDone(1121)) - Task 'attempt_local1228851727_0001_m_000000_0' done.
2016-07-25 15:47:07,256 INFO  [LocalJobRunner Map Task Executor #0] mapred.LocalJobRunner (LocalJobRunner.java:run(249)) - Finishing task: attempt_local1228851727_0001_m_000000_0
2016-07-25 15:47:07,256 INFO  [Thread-4] mapred.LocalJobRunner (LocalJobRunner.java:runTasks(456)) - map task executor complete.
2016-07-25 15:47:07,259 INFO  [Thread-4] mapred.LocalJobRunner (LocalJobRunner.java:runTasks(448)) - Waiting for reduce tasks
2016-07-25 15:47:07,259 INFO  [pool-3-thread-1] mapred.LocalJobRunner (LocalJobRunner.java:run(302)) - Starting task: attempt_local1228851727_0001_r_000000_0
2016-07-25 15:47:07,266 INFO  [pool-3-thread-1] util.ProcfsBasedProcessTree (ProcfsBasedProcessTree.java:isAvailable(181)) - ProcfsBasedProcessTree currently is supported only on Linux.
2016-07-25 15:47:07,294 INFO  [pool-3-thread-1] mapred.Task (Task.java:initialize(587)) -  Using ResourceCalculatorProcessTree : org.apache.hadoop.yarn.util.WindowsBasedProcessTree@57baec0e
2016-07-25 15:47:07,297 INFO  [pool-3-thread-1] mapred.ReduceTask (ReduceTask.java:run(362)) - Using ShuffleConsumerPlugin: org.apache.hadoop.mapreduce.task.reduce.Shuffle@7c165ec0
2016-07-25 15:47:07,306 INFO  [pool-3-thread-1] reduce.MergeManagerImpl (MergeManagerImpl.java:<init>(196)) - MergerManager: memoryLimit=1503238528, maxSingleShuffleLimit=375809632, mergeThreshold=992137472, ioSortFactor=10, memToMemMergeOutputsThreshold=10
2016-07-25 15:47:07,308 INFO  [EventFetcher for fetching Map Completion Events] reduce.EventFetcher (EventFetcher.java:run(61)) - attempt_local1228851727_0001_r_000000_0 Thread started: EventFetcher for fetching Map Completion Events
2016-07-25 15:47:07,334 INFO  [localfetcher#1] reduce.LocalFetcher (LocalFetcher.java:copyMapOutput(141)) - localfetcher#1 about to shuffle output of map attempt_local1228851727_0001_m_000000_0 decomp: 200 len: 204 to MEMORY
2016-07-25 15:47:07,338 INFO  [localfetcher#1] reduce.InMemoryMapOutput (InMemoryMapOutput.java:shuffle(100)) - Read 200 bytes from map-output for attempt_local1228851727_0001_m_000000_0
2016-07-25 15:47:07,361 INFO  [localfetcher#1] reduce.MergeManagerImpl (MergeManagerImpl.java:closeInMemoryFile(314)) - closeInMemoryFile -> map-output of size: 200, inMemoryMapOutputs.size() -> 1, commitMemory -> 0, usedMemory ->200
2016-07-25 15:47:07,362 INFO  [EventFetcher for fetching Map Completion Events] reduce.EventFetcher (EventFetcher.java:run(76)) - EventFetcher is interrupted.. Returning
2016-07-25 15:47:07,363 INFO  [pool-3-thread-1] mapred.LocalJobRunner (LocalJobRunner.java:statusUpdate(591)) - 1 / 1 copied.
2016-07-25 15:47:07,363 INFO  [pool-3-thread-1] reduce.MergeManagerImpl (MergeManagerImpl.java:finalMerge(674)) - finalMerge called with 1 in-memory map-outputs and 0 on-disk map-outputs
2016-07-25 15:47:07,369 INFO  [pool-3-thread-1] mapred.Merger (Merger.java:merge(597)) - Merging 1 sorted segments
2016-07-25 15:47:07,370 INFO  [pool-3-thread-1] mapred.Merger (Merger.java:merge(696)) - Down to the last merge-pass, with 1 segments left of total size: 193 bytes
2016-07-25 15:47:07,371 INFO  [pool-3-thread-1] reduce.MergeManagerImpl (MergeManagerImpl.java:finalMerge(751)) - Merged 1 segments, 200 bytes to disk to satisfy reduce memory limit
2016-07-25 15:47:07,372 INFO  [pool-3-thread-1] reduce.MergeManagerImpl (MergeManagerImpl.java:finalMerge(781)) - Merging 1 files, 204 bytes from disk
2016-07-25 15:47:07,373 INFO  [pool-3-thread-1] reduce.MergeManagerImpl (MergeManagerImpl.java:finalMerge(796)) - Merging 0 segments, 0 bytes from memory into reduce
2016-07-25 15:47:07,373 INFO  [pool-3-thread-1] mapred.Merger (Merger.java:merge(597)) - Merging 1 sorted segments
2016-07-25 15:47:07,373 INFO  [pool-3-thread-1] mapred.Merger (Merger.java:merge(696)) - Down to the last merge-pass, with 1 segments left of total size: 193 bytes
2016-07-25 15:47:07,374 INFO  [pool-3-thread-1] mapred.LocalJobRunner (LocalJobRunner.java:statusUpdate(591)) - 1 / 1 copied.
2016-07-25 15:47:07,377 INFO  [pool-3-thread-1] Configuration.deprecation (Configuration.java:warnOnceIfDeprecated(1049)) - mapred.skip.on is deprecated. Instead, use mapreduce.job.skiprecords
2016-07-25 15:47:07,385 INFO  [pool-3-thread-1] mapred.Task (Task.java:done(1001)) - Task:attempt_local1228851727_0001_r_000000_0 is done. And is in the process of committing
2016-07-25 15:47:07,387 INFO  [pool-3-thread-1] mapred.LocalJobRunner (LocalJobRunner.java:statusUpdate(591)) - 1 / 1 copied.
2016-07-25 15:47:07,387 INFO  [pool-3-thread-1] mapred.Task (Task.java:commit(1162)) - Task attempt_local1228851727_0001_r_000000_0 is allowed to commit now
2016-07-25 15:47:07,387 INFO  [pool-3-thread-1] output.FileOutputCommitter (FileOutputCommitter.java:commitTask(439)) - Saved output of task 'attempt_local1228851727_0001_r_000000_0' to file:/E:/wc/output/_temporary/0/task_local1228851727_0001_r_000000
2016-07-25 15:47:07,387 INFO  [pool-3-thread-1] mapred.LocalJobRunner (LocalJobRunner.java:statusUpdate(591)) - reduce > reduce
2016-07-25 15:47:07,387 INFO  [pool-3-thread-1] mapred.Task (Task.java:sendDone(1121)) - Task 'attempt_local1228851727_0001_r_000000_0' done.
2016-07-25 15:47:07,387 INFO  [pool-3-thread-1] mapred.LocalJobRunner (LocalJobRunner.java:run(325)) - Finishing task: attempt_local1228851727_0001_r_000000_0
2016-07-25 15:47:07,388 INFO  [Thread-4] mapred.LocalJobRunner (LocalJobRunner.java:runTasks(456)) - reduce task executor complete.
2016-07-25 15:47:08,090 INFO  [main] mapreduce.Job (Job.java:monitorAndPrintJob(1366)) - Job job_local1228851727_0001 running in uber mode : false
2016-07-25 15:47:08,094 INFO  [main] mapreduce.Job (Job.java:monitorAndPrintJob(1373)) -  map 100% reduce 100%
2016-07-25 15:47:08,097 INFO  [main] mapreduce.Job (Job.java:monitorAndPrintJob(1384)) - Job job_local1228851727_0001 completed successfully
2016-07-25 15:47:08,128 INFO  [main] mapreduce.Job (Job.java:monitorAndPrintJob(1391)) - Counters: 33
	File System Counters
		FILE: Number of bytes read=890
		FILE: Number of bytes written=525466
		FILE: Number of read operations=0
		FILE: Number of large read operations=0
		FILE: Number of write operations=0
	Map-Reduce Framework
		Map input records=6
		Map output records=12
		Map output bytes=174
		Map output materialized bytes=204
		Input split bytes=93
		Combine input records=0
		Combine output records=0
		Reduce input groups=5
		Reduce shuffle bytes=204
		Reduce input records=12
		Reduce output records=5
		Spilled Records=24
		Shuffled Maps =1
		Failed Shuffles=0
		Merged Map outputs=1
		GC time elapsed (ms)=0
		CPU time spent (ms)=0
		Physical memory (bytes) snapshot=0
		Virtual memory (bytes) snapshot=0
		Total committed heap usage (bytes)=504758272
	Shuffle Errors
		BAD_ID=0
		CONNECTION=0
		IO_ERROR=0
		WRONG_LENGTH=0
		WRONG_MAP=0
		WRONG_REDUCE=0
	File Input Format Counters 
		Bytes Read=78
	File Output Format Counters 
		Bytes Written=56
The result file (part-r-00000 under E:/wc/output) contains:

haha    4
hehe    2
heiheihei    2
lalala    1
lololo    3
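
To inspect the result programmatically rather than opening the file by hand, a plain-Java sketch works, since in local mode the output is an ordinary file (the class name is made up for illustration):

package cn.hadoop.mr;

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;

// Hypothetical helper: dump the reducer output after the job finishes.
public class PrintResult {
    public static void main(String[] args) throws IOException {
        Files.lines(Paths.get("E:/wc/output/part-r-00000"))
             .forEach(System.out::println);
    }
}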


That wraps up this look at running the Hadoop word count in local mode; hopefully it is a useful reference.



