2.2.7 Hadoop Series: Offline Computing - MapReduce Distributed Computing - Traffic Statistics: Sum Aggregation

This article walks through a MapReduce sum-aggregation job for per-phone traffic statistics, hoping to offer a useful reference for developers working on similar problems. Interested readers can follow along.

Contents

1. Requirement Analysis

2. Code Implementation

2.1 Data Overview

2.2 Solution Approach

2.3 Code Structure

2.3.1 FlowBean

2.3.2 FlowCountMapper

2.3.3 FlowCountReducer

2.3.4 JobMain

3. Running the Job and Analyzing Results

3.1 Preparation

3.2 Running the Code and Results


1. Requirement Analysis

Sum aggregation: for each phone number, compute the sums of its upstream flow, downstream flow, total upstream flow, and total downstream flow (the upFlow, downFlow, upCountFlow, and downCountFlow fields below).

Analysis: use the phone number as the key, and bundle the four flow fields (upstream flow, downstream flow, total upstream flow, total downstream flow) into a single value. This key/value pair is both the output of the map phase and the input of the reduce phase.
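To make the shuffle concrete, here is an illustrative trace with made-up numbers (the phone numbers and flow values are hypothetical, not taken from the actual dataset):

map output:   ("13726230503", 2  4  132  1512)
map output:   ("13726230503", 1  2   40   720)
map output:   ("13560439658", 3  3  180  1938)

reduce input:  "13726230503" -> [(2, 4, 132, 1512), (1, 2, 40, 720)]
reduce output: "13726230503"    3  6  172  2232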

2. Code Implementation

2.1 Data Overview
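The input is a tab-separated phone-traffic log. The exact dataset is not reproduced here, but a hypothetical record in the layout the mapper expects looks like this (phone number at column index 1, the four flow fields at column indexes 6-9; the <...> columns are placeholders):

1363157985066	13726230503	<mac-address>	<ip>	<host>	<site-type>	2	4	132	1512	200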

2.2 Solution Approach

Wrap the four flow fields in a custom FlowBean that implements Hadoop's Writable interface, so the bean can be serialized across the shuffle. The mapper emits (phone number, FlowBean) pairs; the framework's default partitioning and grouping deliver all beans for one phone number to a single reduce call; the reducer sums the four fields across each group and emits one totals bean per phone number.

2.3 Code Structure

The implementation consists of four classes: the FlowBean value type, the FlowCountMapper, the FlowCountReducer, and the JobMain driver.

2.3.1 FlowBean

package ucas.mapreduce_flowcount;

import org.apache.hadoop.io.Writable;

import java.io.DataInput;
import java.io.DataOutput;
import java.io.IOException;

// Custom value type carrying the four flow fields; it implements Writable
// so Hadoop can serialize it between the map and reduce phases.
public class FlowBean implements Writable {

    private Integer upFlow;        // upstream flow
    private Integer downFlow;      // downstream flow
    private Integer upCountFlow;   // total upstream flow
    private Integer downCountFlow; // total downstream flow

    public Integer getUpFlow() {
        return upFlow;
    }

    public void setUpFlow(Integer upFlow) {
        this.upFlow = upFlow;
    }

    public Integer getDownFlow() {
        return downFlow;
    }

    public void setDownFlow(Integer downFlow) {
        this.downFlow = downFlow;
    }

    public Integer getUpCountFlow() {
        return upCountFlow;
    }

    public void setUpCountFlow(Integer upCountFlow) {
        this.upCountFlow = upCountFlow;
    }

    public Integer getDownCountFlow() {
        return downCountFlow;
    }

    public void setDownCountFlow(Integer downCountFlow) {
        this.downCountFlow = downCountFlow;
    }

    // Determines the text form written by TextOutputFormat in the final output.
    @Override
    public String toString() {
        return upFlow + "\t" + downFlow + "\t" + upCountFlow + "\t" + downCountFlow;
    }

    // Serialization: write the four fields in a fixed order.
    @Override
    public void write(DataOutput dataOutput) throws IOException {
        dataOutput.writeInt(upFlow);
        dataOutput.writeInt(downFlow);
        dataOutput.writeInt(upCountFlow);
        dataOutput.writeInt(downCountFlow);
    }

    // Deserialization: read the fields back in exactly the same order they were written.
    @Override
    public void readFields(DataInput dataInput) throws IOException {
        this.upFlow = dataInput.readInt();
        this.downFlow = dataInput.readInt();
        this.upCountFlow = dataInput.readInt();
        this.downCountFlow = dataInput.readInt();
    }
}
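A quick way to sanity-check a custom Writable is to serialize an instance to a byte buffer and read it back. A minimal standalone sketch (not part of the job itself; the class name FlowBeanRoundTrip is ours) might look like this:

package ucas.mapreduce_flowcount;

import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;

public class FlowBeanRoundTrip {
    public static void main(String[] args) throws IOException {
        FlowBean in = new FlowBean();
        in.setUpFlow(2);
        in.setDownFlow(4);
        in.setUpCountFlow(132);
        in.setDownCountFlow(1512);

        // Serialize exactly as Hadoop would, via write(DataOutput).
        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        in.write(new DataOutputStream(bytes));

        // Deserialize into a fresh bean via readFields(DataInput).
        FlowBean out = new FlowBean();
        out.readFields(new DataInputStream(new ByteArrayInputStream(bytes.toByteArray())));

        // Field order in readFields must mirror write(), or values get scrambled.
        System.out.println(out); // expected: 2	4	132	1512
    }
}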

2.3.2 FlowCountMapper

package ucas.mapreduce_flowcount;

import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

import java.io.IOException;

public class FlowCountMapper extends Mapper<LongWritable, Text, Text, FlowBean> {
    @Override
    protected void map(LongWritable key, Text value, Context context) throws IOException, InterruptedException {
        // 1: split the line and pick out the phone number
        String[] split = value.toString().split("\t");
        String phoneNum = split[1];

        // 2: extract the four flow fields into a FlowBean
        FlowBean flowBean = new FlowBean();
        flowBean.setUpFlow(Integer.parseInt(split[6]));
        flowBean.setDownFlow(Integer.parseInt(split[7]));
        flowBean.setUpCountFlow(Integer.parseInt(split[8]));
        flowBean.setDownCountFlow(Integer.parseInt(split[9]));

        // 3: write K2 (phone number) and V2 (FlowBean) to the context
        context.write(new Text(phoneNum), flowBean);
    }
}
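Note that this mapper assumes every line has at least ten tab-separated columns with numeric flow fields; a single malformed record would fail the task with an ArrayIndexOutOfBoundsException or NumberFormatException. A defensive variant of the map() body (a sketch of ours, not from the original code) skips bad lines instead:

// Hypothetical hardened body for map(): ignore malformed records.
String[] split = value.toString().split("\t");
if (split.length < 10) {
    return; // not enough columns; skip the record
}
try {
    FlowBean flowBean = new FlowBean();
    flowBean.setUpFlow(Integer.parseInt(split[6]));
    flowBean.setDownFlow(Integer.parseInt(split[7]));
    flowBean.setUpCountFlow(Integer.parseInt(split[8]));
    flowBean.setDownCountFlow(Integer.parseInt(split[9]));
    context.write(new Text(split[1]), flowBean);
} catch (NumberFormatException e) {
    // flow fields were not numeric; skip the record
}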

2.3.3 FlowCountReducer

package ucas.mapreduce_flowcount;

import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Reducer;

import java.io.IOException;

public class FlowCountReducer extends Reducer<Text, FlowBean, Text, FlowBean> {
    @Override
    protected void reduce(Text key, Iterable<FlowBean> values, Context context) throws IOException, InterruptedException {
        // Accumulate the four flow fields across all records for this phone number
        Integer upFlow = 0;
        Integer downFlow = 0;
        Integer upCountFlow = 0;
        Integer downCountFlow = 0;
        for (FlowBean value : values) {
            upFlow += value.getUpFlow();
            downFlow += value.getDownFlow();
            upCountFlow += value.getUpCountFlow();
            downCountFlow += value.getDownCountFlow();
        }

        // Pack the totals into a new FlowBean
        FlowBean flowBean = new FlowBean();
        flowBean.setUpFlow(upFlow);
        flowBean.setDownFlow(downFlow);
        flowBean.setUpCountFlow(upCountFlow);
        flowBean.setDownCountFlow(downCountFlow);

        // Write K3 (phone number) and V3 (totals bean) to the context
        context.write(key, flowBean);
    }
}
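One subtlety worth knowing: Hadoop reuses the same FlowBean instance across iterations of the values iterable, deserializing each record into it in turn. Accumulating into local variables inside the loop, as done above, is therefore the safe pattern; stashing references to value itself for later use would leave you with several pointers to one repeatedly overwritten object.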

2.3.4 JobMain

package ucas.mapreduce_flowcount;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.conf.Configured;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.TextInputFormat;
import org.apache.hadoop.mapreduce.lib.output.TextOutputFormat;
import org.apache.hadoop.util.Tool;
import org.apache.hadoop.util.ToolRunner;

public class JobMain extends Configured implements Tool {
    @Override
    public int run(String[] strings) throws Exception {
        // Create the job object
        Job job = Job.getInstance(super.getConf(), "mapreduce_flowcount");
        // Required when the packaged jar runs on the cluster
        job.setJarByClass(JobMain.class);

        // Step 1: set the input format that produces K1 and V1
        job.setInputFormatClass(TextInputFormat.class);
        TextInputFormat.addInputPath(job, new Path("hdfs://192.168.0.101:8020/input/flowcount"));

        // Step 2: set the Mapper class
        job.setMapperClass(FlowCountMapper.class);
        // Set the map output types: K2 and V2
        job.setMapOutputKeyClass(Text.class);
        job.setMapOutputValueClass(FlowBean.class);

        // Steps 3-6 (partitioning, sorting, combining, grouping) use the defaults

        // Step 7: set the Reducer class
        job.setReducerClass(FlowCountReducer.class);
        // Set the reduce output types: K3 and V3
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(FlowBean.class);
        // The number of reduce tasks is left at the default (1)

        // Step 8: set the output format and the output path
        job.setOutputFormatClass(TextOutputFormat.class);
        TextOutputFormat.setOutputPath(job, new Path("hdfs://192.168.0.101:8020/out/flowcount_out"));

        boolean b = job.waitForCompletion(true);
        return b ? 0 : 1;
    }

    public static void main(String[] args) throws Exception {
        Configuration configuration = new Configuration();
        // Launch the job
        int run = ToolRunner.run(configuration, new JobMain(), args);
        System.exit(run);
    }
}
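Since the per-phone summation is associative and commutative, and the reducer's input and output types match the map output types, the reducer class could also serve as a combiner to shrink shuffle traffic (the counters in section 3.2 show this job ran without one: Combine input records=0). A one-line addition to run(), left out of the original code, would enable it:

// Optional: reuse the reducer as a combiner to pre-sum on the map side.
job.setCombinerClass(FlowCountReducer.class);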

3. Running the Job and Analyzing Results

3.1 Preparation

On the node01 node, create the input directory in HDFS and upload the data file; then package the jar in IDEA and upload it to /export/software.
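Concretely, the HDFS side of the preparation might look like the following (the data file name data_flow.dat is a placeholder; substitute your own):

# Create the input directory that JobMain's hard-coded input path expects
hdfs dfs -mkdir -p /input/flowcount
# Upload the raw traffic log
hdfs dfs -put data_flow.dat /input/flowcount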

3.2 Running the Code and Results

Run command:

hadoop jar day04_mapreduce_combiner-1.0-SNAPSHOT.jar ucas.mapreduce_flowcount.JobMain

Job counter output:

2020-10-11 00:00:04,735 INFO mapreduce.Job:  map 0% reduce 0%
2020-10-11 00:00:11,866 INFO mapreduce.Job:  map 100% reduce 0%
2020-10-11 00:00:18,936 INFO mapreduce.Job:  map 100% reduce 100%
2020-10-11 00:00:24,066 INFO mapreduce.Job: Job job_1602327055253_0004 completed successfully
2020-10-11 00:00:24,238 INFO mapreduce.Job: Counters: 53
	File System Counters
		FILE: Number of bytes read=663
		FILE: Number of bytes written=432667
		FILE: Number of read operations=0
		FILE: Number of large read operations=0
		FILE: Number of write operations=0
		HDFS: Number of bytes read=2588
		HDFS: Number of bytes written=556
		HDFS: Number of read operations=8
		HDFS: Number of large read operations=0
		HDFS: Number of write operations=2
	Job Counters
		Launched map tasks=1
		Launched reduce tasks=1
		Data-local map tasks=1
		Total time spent by all maps in occupied slots (ms)=5093
		Total time spent by all reduces in occupied slots (ms)=4175
		Total time spent by all map tasks (ms)=5093
		Total time spent by all reduce tasks (ms)=4175
		Total vcore-milliseconds taken by all map tasks=5093
		Total vcore-milliseconds taken by all reduce tasks=4175
		Total megabyte-milliseconds taken by all map tasks=5215232
		Total megabyte-milliseconds taken by all reduce tasks=4275200
	Map-Reduce Framework
		Map input records=22
		Map output records=22
		Map output bytes=613
		Map output materialized bytes=663
		Input split bytes=120
		Combine input records=0
		Combine output records=0
		Reduce input groups=21
		Reduce shuffle bytes=663
		Reduce input records=22
		Reduce output records=21
		Spilled Records=44
		Shuffled Maps =1
		Failed Shuffles=0
		Merged Map outputs=1
		GC time elapsed (ms)=170
		CPU time spent (ms)=2360
		Physical memory (bytes) snapshot=478408704
		Virtual memory (bytes) snapshot=4846075904
		Total committed heap usage (bytes)=303030272
		Peak Map Physical memory (bytes)=371359744
		Peak Map Virtual memory (bytes)=2409140224
		Peak Reduce Physical memory (bytes)=107048960
		Peak Reduce Virtual memory (bytes)=2436935680
	Shuffle Errors
		BAD_ID=0
		CONNECTION=0
		IO_ERROR=0
		WRONG_LENGTH=0
		WRONG_MAP=0
		WRONG_REDUCE=0
	File Input Format Counters
		Bytes Read=2468
	File Output Format Counters
		Bytes Written=556
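Reading the counters: Map input records=22 and Reduce input groups=21 mean the 22 input lines contained 21 distinct phone numbers (one number occurred twice), and Reduce output records=21 confirms one totals line per phone number. Combine input records=0 shows no combiner ran, matching the default configuration in JobMain.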

Result output:
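Per FlowBean.toString() and TextOutputFormat's tab separator, each output line is the phone number followed by the four totals, all tab-separated. With a single reducer, the result lands in the default part file and can be inspected with:

hdfs dfs -cat /out/flowcount_out/part-r-00000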

This concludes the article on 2.2.7 Hadoop Series: Offline Computing - MapReduce Distributed Computing - Traffic Statistics: Sum Aggregation; we hope it proves helpful to fellow developers!


