Configuring HDFS and MapReduce in Hadoop 2.x to Run the WordCount Program

2024-06-14 06:18

This article walks through configuring HDFS and MapReduce (on YARN) in Hadoop 2.x on a small four-node cluster, then verifies the setup by running the bundled WordCount example. The daemons are distributed across the nodes as follows:

Host    HDFS                           MapReduce (YARN)
node1   NameNode                       ResourceManager
node2   SecondaryNameNode & DataNode   NodeManager
node3   DataNode                       NodeManager
node4   DataNode                       NodeManager

1. Configure hadoop-env.sh

export JAVA_HOME=/csh/link/jdk

2. Configure core-site.xml

<property>
  <name>fs.defaultFS</name>
  <value>hdfs://node1:9000</value>
</property>
<property>
  <name>hadoop.tmp.dir</name>
  <value>/csh/hadoop/hadoop2.7.2/tmp</value>
</property>
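
fs.defaultFS is the address every Hadoop client uses to reach the NameNode, and hadoop.tmp.dir is the base directory other temporary paths derive from. Once HDFS is up (step 8), a few lines of Java against the FileSystem API can confirm that node1:9000 answers. This is only a sanity-check sketch; the class name is made up for illustration, and it assumes the Hadoop 2.7.2 client jars are on the classpath.

import java.net.URI;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class HdfsSmokeTest {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Same URI as fs.defaultFS above; passed explicitly in case
        // core-site.xml is not on the client's classpath.
        FileSystem fs = FileSystem.get(URI.create("hdfs://node1:9000"), conf);
        fs.mkdirs(new Path("/tmp/smoke-test"));       // create a throwaway directory
        for (FileStatus status : fs.listStatus(new Path("/"))) {
            System.out.println(status.getPath());     // list the HDFS root
        }
        fs.close();
    }
}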

3. Configure hdfs-site.xml

<property>
  <name>dfs.namenode.http-address</name>
  <value>node1:50070</value>
</property>
<property>
  <name>dfs.namenode.secondary.http-address</name>
  <value>node2:50090</value>
</property>
<property>
  <name>dfs.namenode.name.dir</name>
  <value>/csh/hadoop/hadoop2.7.2/name</value>
</property>
<property>
  <name>dfs.datanode.data.dir</name>
  <value>/csh/hadoop/hadoop2.7.2/data</value>
</property>
<property>
  <name>dfs.replication</name>
  <value>3</value>
</property>
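
The name and data directories above must be writable on their respective nodes, and dfs.replication=3 matches the three DataNodes in the table. If it is ever unclear which values a client actually picks up, a tiny Java program can read them back from the loaded configuration; this is just a hedge against classpath surprises, and the class name is made up.

import org.apache.hadoop.conf.Configuration;

public class PrintHdfsConf {
    public static void main(String[] args) {
        Configuration conf = new Configuration();
        // new Configuration() only loads core-default.xml and core-site.xml,
        // so pull in hdfs-site.xml from the classpath explicitly.
        conf.addResource("hdfs-site.xml");
        System.out.println("dfs.replication       = " + conf.get("dfs.replication"));
        System.out.println("dfs.namenode.name.dir = " + conf.get("dfs.namenode.name.dir"));
        System.out.println("dfs.datanode.data.dir = " + conf.get("dfs.datanode.data.dir"));
    }
}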

4. Configure mapred-site.xml

<property>
  <name>mapreduce.framework.name</name>
  <value>yarn</value>
</property>

5. Configure yarn-site.xml

<property>
  <name>yarn.resourcemanager.hostname</name>
  <value>node1</value>
</property>
<property>
  <name>yarn.nodemanager.aux-services</name>
  <value>mapreduce_shuffle</value>
</property>
<property>
  <name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
  <value>org.apache.hadoop.mapred.ShuffleHandler</value>
</property>

6. Configure masters

node2

7. Configure slaves

node2
node3
node4

8. Start Hadoop

# Format the NameNode (only needed before the first start)
bin/hadoop namenode -format
# Start the HDFS daemons (NameNode, SecondaryNameNode, DataNodes)
sbin/start-dfs.sh
# Start the YARN daemons (ResourceManager, NodeManagers)
sbin/start-yarn.sh
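
After the start scripts finish, running jps on each node should show the daemons listed in the table at the top. Another option is to ask the ResourceManager which NodeManagers have registered, using the YarnClient API. The sketch below assumes the yarn-site.xml from step 5 is on the client's classpath; the class name is made up for illustration.

import org.apache.hadoop.yarn.api.records.NodeReport;
import org.apache.hadoop.yarn.client.api.YarnClient;
import org.apache.hadoop.yarn.conf.YarnConfiguration;

public class ListYarnNodes {
    public static void main(String[] args) throws Exception {
        // YarnConfiguration loads yarn-site.xml, which points at node1
        // via yarn.resourcemanager.hostname.
        YarnConfiguration conf = new YarnConfiguration();
        YarnClient yarnClient = YarnClient.createYarnClient();
        yarnClient.init(conf);
        yarnClient.start();
        // Expect node2, node3 and node4 to show up as RUNNING NodeManagers.
        for (NodeReport node : yarnClient.getNodeReports()) {
            System.out.println(node.getNodeId() + " " + node.getNodeState());
        }
        yarnClient.stop();
    }
}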

9. Run the WordCount Program

# Create the input file wc.txt
echo "I love Java I love Hadoop I love BigData Good Good Study, Day Day Up" > wc.txt
# Create the input directory in HDFS
hdfs dfs -mkdir -p /input/wordcount/
# Upload wc.txt to HDFS
hdfs dfs -put wc.txt /input/wordcount
# Run the WordCount example
hadoop jar /csh/software/hadoop-2.7.2/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.7.2.jar wordcount /input/wordcount/ /output/wordcount/
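
The wordcount class invoked here ships with the examples jar, so nothing has to be compiled for this walkthrough. For reference, a MapReduce WordCount looks roughly like the sketch below (a simplified version in the spirit of the bundled example, not its exact source): the mapper emits a (word, 1) pair per token, and the same reducer class is used as both combiner and reducer to sum the counts.

import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class WordCount {

    // Mapper: split each line into tokens and emit (word, 1).
    public static class TokenizerMapper extends Mapper<Object, Text, Text, IntWritable> {
        private static final IntWritable ONE = new IntWritable(1);
        private final Text word = new Text();

        @Override
        protected void map(Object key, Text value, Context context)
                throws IOException, InterruptedException {
            StringTokenizer itr = new StringTokenizer(value.toString());
            while (itr.hasMoreTokens()) {
                word.set(itr.nextToken());
                context.write(word, ONE);
            }
        }
    }

    // Reducer (also used as combiner): sum the counts for each word.
    public static class IntSumReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
        private final IntWritable result = new IntWritable();

        @Override
        protected void reduce(Text key, Iterable<IntWritable> values, Context context)
                throws IOException, InterruptedException {
            int sum = 0;
            for (IntWritable val : values) {
                sum += val.get();
            }
            result.set(sum);
            context.write(key, result);
        }
    }

    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        Job job = Job.getInstance(conf, "word count");
        job.setJarByClass(WordCount.class);
        job.setMapperClass(TokenizerMapper.class);
        job.setCombinerClass(IntSumReducer.class);
        job.setReducerClass(IntSumReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}

Packaged into its own jar, it would be submitted the same way as above, simply pointing hadoop jar at that jar instead of the examples jar.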

10. Results

[root@node1 sbin]# hadoop jar /csh/software/hadoop-2.7.2/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.7.2.jar wordcount /input/wordcount/ /output/wordcount/
16/03/24 19:26:48 INFO client.RMProxy: Connecting to ResourceManager at node1/192.161.11:8032
16/03/24 19:26:56 INFO input.FileInputFormat: Total input paths to process : 1
16/03/24 19:26:56 INFO mapreduce.JobSubmitter: number of splits:1
16/03/24 19:26:57 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1458872237175_0001
16/03/24 19:26:59 INFO impl.YarnClientImpl: Submitted application application_1458872237175_0001
16/03/24 19:27:00 INFO mapreduce.Job: The url to track the job: http://node1:8088/proxy/application_1458872237175_0001/
16/03/24 19:27:00 INFO mapreduce.Job: Running job: job_1458872237175_0001
16/03/24 19:28:13 INFO mapreduce.Job: Job job_1458872237175_0001 running in uber mode : false
16/03/24 19:28:13 INFO mapreduce.Job:  map 0% reduce 0%
16/03/24 19:30:07 INFO mapreduce.Job:  map 100% reduce 0%
16/03/24 19:31:13 INFO mapreduce.Job:  map 100% reduce 33%
16/03/24 19:31:16 INFO mapreduce.Job:  map 100% reduce 100%
16/03/24 19:31:23 INFO mapreduce.Job: Job job_1458872237175_0001 completed successfully
16/03/24 19:31:24 INFO mapreduce.Job: Counters: 49
    File System Counters
        FILE: Number of bytes read=106
        FILE: Number of bytes written=235387
        FILE: Number of read operations=0
        FILE: Number of large read operations=0
        FILE: Number of write operations=0
        HDFS: Number of bytes read=174
        HDFS: Number of bytes written=64
        HDFS: Number of read operations=6
        HDFS: Number of large read operations=0
        HDFS: Number of write operations=2
    Job Counters
        Launched map tasks=1
        Launched reduce tasks=1
        Data-local map tasks=1
        Total time spent by all maps in occupied slots (ms)=116501
        Total time spent by all reduces in occupied slots (ms)=53945
        Total time spent by all map tasks (ms)=116501
        Total time spent by all reduce tasks (ms)=53945
        Total vcore-milliseconds taken by all map tasks=116501
        Total vcore-milliseconds taken by all reduce tasks=53945
        Total megabyte-milliseconds taken by all map tasks=119297024
        Total megabyte-milliseconds taken by all reduce tasks=55239680
    Map-Reduce Framework
        Map input records=4
        Map output records=15
        Map output bytes=129
        Map output materialized bytes=106
        Input split bytes=105
        Combine input records=15
        Combine output records=9
        Reduce input groups=9
        Reduce shuffle bytes=106
        Reduce input records=9
        Reduce output records=9
        Spilled Records=18
        Shuffled Maps =1
        Failed Shuffles=0
        Merged Map outputs=1
        GC time elapsed (ms)=1468
        CPU time spent (ms)=6780
        Physical memory (bytes) snapshot=230531072
        Virtual memory (bytes) snapshot=4152713216
        Total committed heap usage (bytes)=134795264
    Shuffle Errors
        BAD_ID=0
        CONNECTION=0
        IO_ERROR=0
        WRONG_LENGTH=0
        WRONG_MAP=0
        WRONG_REDUCE=0
    File Input Format Counters
        Bytes Read=69
    File Output Format Counters
        Bytes Written=64
[root@node1 sbin]# hdfs dfs -cat /output/wordcount/*
BigData 1
Day 2
Good    2
Hadoop  1
I   3
Java    1
Study,  1
Up  1
love    3

Original post on my personal blog:
Configuring HDFS and MapReduce in Hadoop 2.x to Run the WordCount Program

That concludes this walkthrough of configuring HDFS and MapReduce in Hadoop 2.x to run the WordCount program; hopefully it is useful as a reference.


