Getting Started with the Spark RDD/Core Programming API: map, filter, textFile, cache, Sorting Job Output in Ascending and Descending Order, union, groupByKey, join, reduce, lookup

This article walks through the basics of the Spark RDD/Core programming API: map, filter, textFile, cache, sorting job output in ascending and descending order, union, groupByKey, join, reduce, and lookup. It is meant as a practical reference; follow along with the examples below.

1. map and filter in local mode

2. textFile and cache in cluster mode

3. Sorting job output in ascending and descending order

4. union

5. groupByKey

6. join

7. reduce

8. lookup

 

 

1. map and filter in local mode

Run spark-shell in local mode:

spark@SparkSingleNode:~$ cd /usr/local/spark/spark-1.5.2-bin-hadoop2.6/bin
spark@SparkSingleNode:/usr/local/spark/spark-1.5.2-bin-hadoop2.6/bin$ pwd
/usr/local/spark/spark-1.5.2-bin-hadoop2.6/bin
spark@SparkSingleNode:/usr/local/spark/spark-1.5.2-bin-hadoop2.6/bin$ ./spark-shell

 

Spark provides two main functions for creating an RDD from a collection: parallelize and makeRDD.
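The session below uses parallelize; makeRDD behaves the same way for a local collection. A minimal sketch (my own addition, assuming the same spark-shell session so that sc is already available):

// makeRDD builds an RDD from a local collection, just like parallelize.
val rddFromMakeRDD = sc.makeRDD(List(1, 2, 3, 4, 5))
rddFromMakeRDD.collect()   // Array(1, 2, 3, 4, 5)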

 

scala> val rdd = sc.parallelize(List(1,2,3,4,5))
rdd: org.apache.spark.rdd.RDD[Int] = ParallelCollectionRDD[0] at parallelize at <console>:21

scala> val mappedRDD = rdd.map(2*_)
mappedRDD: org.apache.spark.rdd.RDD[Int] = MapPartitionsRDD[1] at map at <console>:23

scala> mappedRDD.collect

The result:

res0: Array[Int] = Array(2, 4, 6, 8, 10)

scala>

 

 

 

scala> val filteredRDD = mappedRDD.filter(_ > 4)
16/09/26 20:32:29 INFO storage.BlockManagerInfo: Removed broadcast_0_piece0 on localhost:40688 in memory (size: 1218.0 B, free: 534.5 MB)
16/09/26 20:32:30 INFO spark.ContextCleaner: Cleaned accumulator 1
filteredRDD: org.apache.spark.rdd.RDD[Int] = MapPartitionsRDD[2] at filter at <console>:25

scala> filteredRDD.collect

The result is Array(6, 8, 10): only the elements greater than 4 survive the filter.

 

Note: in production code, the idiomatic style is to chain the transformations in a single expression:

scala> val filteredRDDAgain = sc.parallelize(List(1,2,3,4,5)).map(2 * _).filter(_ > 4).collect

2. textFile and cache in cluster mode

Start the Hadoop cluster (HDFS):

spark@SparkSingleNode:/usr/local/hadoop/hadoop-2.6.0$ jps
8457 Jps
spark@SparkSingleNode:/usr/local/hadoop/hadoop-2.6.0$ sbin/start-dfs.sh

 

Start the Spark cluster:

 

spark@SparkSingleNode:/usr/local/spark/spark-1.5.2-bin-hadoop2.6$ sbin/start-all.sh

 

Connect spark-shell to the cluster master:

spark@SparkSingleNode:/usr/local/spark/spark-1.5.2-bin-hadoop2.6/bin$ ./spark-shell --master spark://SparkSingleNode:7077

 

 

Read the file (README.md here is assumed to have been uploaded to the HDFS root directory):

scala> val rdd = sc.textFile("/README.md")

Use count to get the number of lines in the file:

scala> rdd.count

 

took 7.018386 s

res0: Long = 98

The first count took 7.018386 s.

 

The difference between cache and persist is clear from the RDD.scala source:

def persist(newLevel: StorageLevel): this.type = {
  if (storageLevel != StorageLevel.NONE && newLevel != storageLevel) {
    throw new UnsupportedOperationException( "Cannot change storage level of an RDD after it was already assigned a level")
  }
  sc.persistRDD(this)
  sc.cleaner.foreach(_.registerRDDForCleanup(this))
  storageLevel = newLevel
  this
}
/** Persist this RDD with the default storage level (`MEMORY_ONLY`). */
def persist(): this.type = persist(StorageLevel.MEMORY_ONLY)

/** Persist this RDD with the default storage level (`MEMORY_ONLY`). */
def cache(): this.type = persist()

From this we can see:
1) RDD's cache() simply calls persist() with the default MEMORY_ONLY storage level;
2) persist() can be given an explicit StorageLevel to match the needs of the job;
3) Neither cache nor persist is an action: they only mark the RDD for caching, and nothing is materialized until an action runs.
Note: both cache and persist can be undone with unpersist.
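For example, a minimal sketch (my own addition) of persisting an RDD with an explicit storage level and releasing it afterwards; otherRdd here is a hypothetical, freshly created RDD:

import org.apache.spark.storage.StorageLevel

// Persist with an explicit level instead of the MEMORY_ONLY default used by cache().
val otherRdd = sc.textFile("/README.md")            // hypothetical fresh RDD
otherRdd.persist(StorageLevel.MEMORY_AND_DISK_SER)
otherRdd.count()       // the first action actually materializes the cached blocks
otherRdd.unpersist()   // release the cached blocks when they are no longer needed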

 

Cache the RDD:

scala> rdd.cache
res1: rdd.type = MapPartitionsRDD[1] at textFile at <console>:21

Run count so that the cache is actually populated (caching is lazy):

scala> rdd.count

 

took 2.055063 s
res2: Long = 98

This count took 2.055063 s.

 

Run count once more:

took 0.583177 s
res3: Long = 98

This run took only 0.583177 s.

 

In summary, once the data is cached, subsequent computations on it are dramatically faster.

 


Next, run a word count on the RDD above:

scala> val wordcount = rdd.flatMap(_.split(' ')).map((_,1)).reduceByKey(_+_)
wordcount: org.apache.spark.rdd.RDD[(String, Int)] = ShuffledRDD[4] at reduceByKey at <console>:23

scala> wordcount.collect

 

The collected output is shown below; saveAsTextFile then writes the result out to HDFS:

 

res4: Array[(String, Int)] = Array((package,1), (this,1), (Version"](http://spark.apache.org/docs/latest/building-spark.html#specifying-the-hadoop-version),1), (Because,1), (Python,2), (cluster.,1), (its,1), ([run,1), (general,2), (have,1), (pre-built,1), (locally.,1), (locally,2), (changed,1), (sc.parallelize(1,1), (only,1), (several,1), (This,2), (basic,1), (Configuration,1), (learning,,1), (documentation,3), (YARN,,1), (graph,1), (Hive,2), (first,1), (["Specifying,1), ("yarn-client",1), (page](http://spark.apache.org/documentation.html),1), ([params]`.,1), (application,1), ([project,2), (prefer,1), (SparkPi,2), (<http://spark.apache.org/>,1), (engine,1), (version,1), (file,1), (documentation,,1), (MASTER,1), (example,3), (distribution.,1), (are,1), (params,1), (scala>,1), (DataFrames...
scala> wordcount.saveAsTextFile("/result")

This is just a plain word count: each line is split on spaces, each word is mapped to (word, 1), and reduceByKey sums the counts for each word across the whole file.
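To check what was written, the saved files can be read back; a small sketch (assuming the /result path written above):

// Read the saved word counts back from HDFS and print a few lines.
val saved = sc.textFile("/result")
saved.take(5).foreach(println)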

 

 

3. Sorting job output in ascending and descending order

Ascending:

scala> val wordcount = rdd.flatMap(_.split(' ')).map((_,1)).reduceByKey(_+_).map(x => (x._2,x._1)).sortByKey(true).map(x => (x._2,x._1)).saveAsTextFile("/resultAscSorted")

 

Fetch the output from HDFS as before; the word counts now appear in ascending order of frequency.

 

 

scala> val wordcount = rdd.flatMap(_.split(' ')).map((_,1)).reduceByKey(_+_).map(x => (x._2,x._1)).sortBy(true).map(x => (x._2,x._1)).saveAsTextFile("/resultAscSorted")
<console>:23: error: type mismatch;
found : Boolean(true)
required: ((Int, String)) => ?
val wordcount = rdd.flatMap(_.split(' ')).map((_,1)).reduceByKey(_+_).map(x => (x._2,x._1)).sortBy(true).map(x => (x._2,x._1)).saveAsTextFile("/resultAscSorted")
^

scala>
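The error above occurs because sortBy expects a key-extraction function, not a Boolean. A minimal sketch of the correct sortBy usage (my own addition; the original only demonstrates sortByKey), which avoids the swap-sort-swap pattern entirely:

// sortBy takes a key function plus an ascending flag, so no key/value swap is needed.
val sortedAsc = rdd.flatMap(_.split(' '))
  .map((_, 1))
  .reduceByKey(_ + _)
  .sortBy(_._2, ascending = true)
sortedAsc.saveAsTextFile("/resultAscSortedBySortBy")   // hypothetical output path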

 

 

Descending:

scala> val wordcount = rdd.flatMap(_.split(' ')).map((_,1)).reduceByKey(_+_).map(x => (x._2,x._1)).sortByKey(false).map(x => (x._2,x._1)).saveAsTextFile("/resultDescSorted")

 

Download the output as before. The job output has now been sorted successfully.

 

4. union

Using union:

scala> val rdd1 = sc.parallelize(List(('a',1),('b',1)))
rdd1: org.apache.spark.rdd.RDD[(Char, Int)] = ParallelCollectionRDD[26] at parallelize at <console>:21

scala> val rdd2 = sc.parallelize(List(('c',1),('d',1)))
rdd2: org.apache.spark.rdd.RDD[(Char, Int)] = ParallelCollectionRDD[27] at parallelize at <console>:21

scala> rdd1 union rdd2
res6: org.apache.spark.rdd.RDD[(Char, Int)] = UnionRDD[28] at union at <console>:26

scala> val result = rdd1 union rdd2
result: org.apache.spark.rdd.RDD[(Char, Int)] = UnionRDD[29] at union at <console>:25

 

 

Use collect to check the result:

scala> result.collect

res7: Array[(Char, Int)] = Array((a,1), (b,1), (c,1), (d,1))
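Note that union keeps duplicate elements; if a set-style union is wanted, distinct can be applied afterwards. A small sketch (my own addition):

// union does not deduplicate; distinct() removes repeated elements.
val withDup = sc.parallelize(List(('a', 1))) union sc.parallelize(List(('a', 1)))
withDup.collect()              // Array((a,1), (a,1))
withDup.distinct().collect()   // Array((a,1))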

 

5. groupByKey

 

scala> val wordcount = rdd.flatMap(_.split(' ')).map((_,1)).groupByKey
wordcount: org.apache.spark.rdd.RDD[(String, Iterable[Int])] = ShuffledRDD[32] at groupByKey at <console>:23

scala> wordcount.collect

res8: Array[(String, Iterable[Int])] = Array((package,CompactBuffer(1)), (this,CompactBuffer(1)), (Version"](http://spark.apache.org/docs/latest/building-spark.html#specifying-the-hadoop-version),CompactBuffer(1)), (Because,CompactBuffer(1)), (Python,CompactBuffer(1, 1)), (cluster.,CompactBuffer(1)), (its,CompactBuffer(1)), ([run,CompactBuffer(1)), (general,CompactBuffer(1, 1)), (YARN,,CompactBuffer(1)), (have,CompactBuffer(1)), (pre-built,CompactBuffer(1)), (locally.,CompactBuffer(1)), (locally,CompactBuffer(1, 1)), (changed,CompactBuffer(1)), (sc.parallelize(1,CompactBuffer(1)), (only,CompactBuffer(1)), (several,CompactBuffer(1)), (learning,,CompactBuffer(1)), (basic,CompactBuffer(1)), (first,CompactBuffer(1)), (This,CompactBuffer(1, 1)), (documentation,CompactBuffer(1, 1, 1)), (Confi...
scala>
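Note that groupByKey shuffles every (word, 1) pair; when the goal is aggregation, reduceByKey (as in the word-count example above) combines values on the map side first and usually moves far less data. A rough comparison, as a sketch:

// groupByKey + map: all the 1s are shuffled, then summed on the reduce side.
val countsViaGroup  = rdd.flatMap(_.split(' ')).map((_, 1)).groupByKey.map { case (w, vs) => (w, vs.sum) }
// reduceByKey: partial sums are computed before the shuffle, so less data moves.
val countsViaReduce = rdd.flatMap(_.split(' ')).map((_, 1)).reduceByKey(_ + _)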

 

6. join

For background on join semantics, see:

http://www.cnblogs.com/goforward/p/4748128.html  

scala> val rdd1 = sc.parallelize(List(('a',1),('a',2),('b',3),('b',4)))
rdd1: org.apache.spark.rdd.RDD[(Char, Int)] = ParallelCollectionRDD[33] at parallelize at <console>:21

scala> val rdd2 = sc.parallelize(List(('a',5),('a',6),('b',7),('b',8)))
rdd2: org.apache.spark.rdd.RDD[(Char, Int)] = ParallelCollectionRDD[34] at parallelize at <console>:21

scala> rdd1 join rdd2
res9: org.apache.spark.rdd.RDD[(Char, (Int, Int))] = MapPartitionsRDD[37] at join at <console>:26

scala> val result = rdd1 join rdd2
result: org.apache.spark.rdd.RDD[(Char, (Int, Int))] = MapPartitionsRDD[40] at join at <console>:25

scala> result.collect

 

res10: Array[(Char, (Int, Int))] = Array((b,(3,7)), (b,(3,8)), (b,(4,7)), (b,(4,8)), (a,(1,5)), (a,(1,6)), (a,(2,5)), (a,(2,6)))

scala>

 

As the output shows, join matches the two RDDs by key and, for each matching key, produces the Cartesian product of that key's values from both sides (an inner join).
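Related pair-RDD operations exist as well; a minimal sketch (my own addition) of leftOuterJoin and cogroup on the same rdd1 and rdd2:

// leftOuterJoin keeps every key from rdd1; missing right-side values become None.
rdd1.leftOuterJoin(rdd2).collect()
// cogroup collects the values from both RDDs per key without multiplying them out.
rdd1.cogroup(rdd2).collect()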

 

 

 

7. reduce

reduce is an action in the RDD API, so calling it submits and runs a job.

 

scala> val rdd = sc.parallelize(List(1,2,3,4,5))
rdd: org.apache.spark.rdd.RDD[Int] = ParallelCollectionRDD[41] at parallelize at <console>:21

scala> rdd.reduce(_+_)

res11: Int = 15
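The function passed to reduce should be commutative and associative, because Spark applies it within each partition first and then combines the partial results across partitions. Another small example (my own addition):

// Any commutative, associative binary function works, e.g. taking the maximum.
rdd.reduce((a, b) => math.max(a, b))   // Int = 5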

 

8. lookup

scala> val rdd2 = sc.parallelize(List(('a',5),('a',6),('b',7),('b',8)))
rdd2: org.apache.spark.rdd.RDD[(Char, Int)] = ParallelCollectionRDD[42] at parallelize at <console>:21

scala> rdd2.lookup('a')    // returns a Seq: (5, 6), i.e. all the values whose key is 'a'

 

res12: Seq[Int] = WrappedArray(5, 6)
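lookup scans every partition unless the RDD has a known partitioner; if the pair RDD has been partitioned by key, only the partition that owns the key is searched. A minimal sketch (my own addition):

import org.apache.spark.HashPartitioner

// With a HashPartitioner, lookup only searches the partition that holds key 'a'.
val partitioned = rdd2.partitionBy(new HashPartitioner(2)).cache()
partitioned.lookup('a')   // WrappedArray(5, 6)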

 

That concludes this introduction to the Spark RDD/Core programming API: map, filter, textFile, cache, sorting job output in ascending and descending order, union, groupByKey, join, reduce, and lookup. I hope the material is useful.


