Spark Problem 12: kryoserializer buffer too small for shuffle output, causing an overflow

2024-06-02 15:58

This post walks through Spark Problem 12, a Kryo serializer buffer that is too small and overflows. It is intended as a reference for developers who hit the same error.

 

More code is available at: https://github.com/xubo245/SparkLearning

From the Spark ecosystem / Alluxio learning series. Versions: alluxio (tachyon) 0.7.1, spark-1.5.2, hadoop-2.6.0

1. Problem description

Running cs-bwamem fails with a serialization buffer overflow during the shuffle. The job writes its SAM output to the local filesystem and the file is large, while the default setting is:

spark.kryoserializer.buffer.max 64m

but the actual data is larger than 2 GB, so the default buffer is far too small. The Spark web UI shows 14.5 GB of input, and the failing operation is a collect.
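
The error message in the log below names the first-line fix: increase spark.kryoserializer.buffer.max. A minimal sketch of the two usual ways to set it (not from the original post; 1024m is an illustrative value, not a recommendation):

# Per-job override at submit time (MyApp / myapp.jar are placeholders):
spark-submit --conf spark.kryoserializer.buffer.max=1024m --class MyApp myapp.jar

# Or as a cluster-wide default in $SPARK_HOME/conf/spark-defaults.conf:
# spark.kryoserializer.buffer.max  1024m

The setting accepts size suffixes (m, g) and is capped below 2048m, a point we return to after the run log.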

2. Running the script

hadoop@Master:~/disk2/xubo/project/alignment/cs-bwamem$ cat csbwamemAlignP1Test10sam.sh 
#for t in 1 2 3 4 5 6 7 8 9 10 15 20 30 40 50 60 70 80 90 100 120 140 160 180 200 300 400 500 600 700 800 900 1000 2000 3000 4000 5000 6000 7000 8000 9000 10000
#for t in 1 10 50 100 400 1000 5000 10000
#for t in {90..400..10}
for t in 100
do
for k in {7..9}
do 
#for j in 10000 100000 1000000 10000000
for j in 10000000
do
for i in 50
do
echo $i
echo $j
echo 't'$t
echo 'k'$k
#fq='g38L'$i'c'$j'Nhs20Paired'$k'.fq'
#fq0='g38L'$i'c'$j'Nhs20Paired*.fastq'
#fq1='/xubo/alignment/sparkBWA/g38L'$i'c'$j'Nhs20Paired1.fastq'
#fq2='/xubo/alignment/sparkBWA/g38L'$i'c'$j'Nhs20Paired2.fastq'
#out='g38L'$i'c'$j'Nhs20Paired12.sam'
#out='/xubo/project/alignment/cs-bwamem/input/fastq/newg38L'$i'c'$j'Nhs20Paired12P64bn200000000t'$t'k'$k'sbatch.adam'
out='/home/hadoop/disk2/xubo/project/alignment/cs-bwamem/newg38L'$i'c'$j'Nhs20Paired12P64bn200000000.sam'
file='/xubo/project/alignment/cs-bwamem/input/fastq/newg38L'$i'c'$j'Nhs20Paired12P64bn200000000.fastq'
#out='/xubo/project/alignment/cs-bwamem/input/fastq/newg38L'$i'c'$j'Nhs20Paired12P1k'$k'.adam'
#file='/xubo/project/alignment/cs-bwamem/input/fastq/newg38L'$i'c'$j'Nhs20Paired12P1.fastq'
echo $file
echo $out
spark-submit --class cs.ucla.edu.bwaspark.BWAMEMSpark --total-executor-cores 20 --executor-cores 2 --executor-memory 20G \
--master spark://219.219.220.149:7077 /home/hadoop/disk2/xubo/tools/cloud-scale-bwamem-0.2.2/target/cloud-scale-bwamem-0.2.2-assembly.jar \
cs-bwamem -bfn 1 -bPSW 1 -sbatch $t -bPSWJNI 1  -oChoice 1 -oPath $out -localRef 1 \
-jniPath /home/hadoop/disk2/xubo/tools/cloud-scale-bwamem-0.2.2/target/jniNative.so \
-isSWExtBatched 1  1 \
/home/hadoop/disk2/xubo/ref/GRCH38L1Index/GRCH38chr1L3556522.fasta  $file
#spark-submit --executor-memory 6g --class cs.ucla.edu.bwaspark.BWAMEMSpark --total-executor-cores 20 --master spark://219.219.220.149:7077  --conf spark.driver.host=219.219.220.149 --conf spark.driver.cores=4 --conf spark.driver.maxResultSize=6g --conf spark.storage.memoryFraction=0.7  --conf spark.akka.threads=2 --conf spark.akka.frameSize=1024 /home/hadoop/xubo/tools/cloud-scale-bwamem-0.2.1/target/cloud-scale-bwamem-0.2.0-assembly.jar merge hdfs://219.219.220.149:9000 $file $out
#/xubo/alignment/sparkBWA/GRCH38chr1L3556522N10L50paired1.fastq /xubo/alignment/sparkBWA/GRCH38chr1L3556522N10L50paired2.fastq \
#/xubo/alignment/output/sparkBWA/datatestLocalGRCH38chr1L3556522N10L50paired12YarnMaster
done
done
done
done
#--master spark://219.219.220.149:7077 /home/hadoop/disk2/xubo/tools/cloud-scale-bwamem-0.2.1/target/cloud-scale-bwamem-0.2.0-assembly.jar \
#--master spark://219.219.220.149:7077 /curr/pengwei/github/cloud-scale-bwamem/target/cloud-scale-bwamem-0.2.0-assembly.jar \
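
Applied to this script, the fix is one extra --conf on the submit line. A sketch with an illustrative 1024m value; everything else is unchanged from the script above:

spark-submit --class cs.ucla.edu.bwaspark.BWAMEMSpark --total-executor-cores 20 --executor-cores 2 --executor-memory 20G \
--conf spark.kryoserializer.buffer.max=1024m \
--master spark://219.219.220.149:7077 /home/hadoop/disk2/xubo/tools/cloud-scale-bwamem-0.2.2/target/cloud-scale-bwamem-0.2.2-assembly.jar \
cs-bwamem -bfn 1 -bPSW 1 -sbatch $t -bPSWJNI 1 -oChoice 1 -oPath $out -localRef 1 \
-jniPath /home/hadoop/disk2/xubo/tools/cloud-scale-bwamem-0.2.2/target/jniNative.so \
-isSWExtBatched 1 1 \
/home/hadoop/disk2/xubo/ref/GRCH38L1Index/GRCH38chr1L3556522.fasta $file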

3. Run log:

hadoop@Master:~/disk2/xubo/project/alignment/cs-bwamem$ ./csbwamemAlignP1Test10sam.sh > csbwamemAlignP1Test10samtime201702281645.txt
[Stage 2:>                                                        (0 + 16) / 64]17/02/28 16:57:37 ERROR TaskSetManager: Task 9 in stage 2.0 failed 4 times; aborting job
Exception in thread "main" org.apache.spark.SparkException: Job aborted due to stage failure: Task 9 in stage 2.0 failed 4 times, most recent failure: Lost task 9.3 in stage 2.0 (TID 180, Mcnode4): org.apache.spark.SparkException: Kryo serialization failed: Buffer overflow. Available: 0, required: 109. To avoid this, increase spark.kryoserializer.buffer.max value.
    at org.apache.spark.serializer.KryoSerializerInstance.serialize(KryoSerializer.scala:263)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:240)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:745)
Driver stacktrace:
    at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1283)
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1271)
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1270)
    at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
    at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47)
    at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1270)
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:697)
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:697)
    at scala.Option.foreach(Option.scala:236)
    at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:697)
    at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:1496)
    at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1458)
    at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1447)
    at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:48)
    at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:567)
    at org.apache.spark.SparkContext.runJob(SparkContext.scala:1824)
    at org.apache.spark.SparkContext.runJob(SparkContext.scala:1837)
    at org.apache.spark.SparkContext.runJob(SparkContext.scala:1850)
    at org.apache.spark.SparkContext.runJob(SparkContext.scala:1921)
    at org.apache.spark.rdd.RDD$$anonfun$collect$1.apply(RDD.scala:909)
    at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:147)
    at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:108)
    at org.apache.spark.rdd.RDD.withScope(RDD.scala:310)
    at org.apache.spark.rdd.RDD.collect(RDD.scala:908)
    at cs.ucla.edu.bwaspark.FastMap$.memPairEndMapping(FastMap.scala:397)
    at cs.ucla.edu.bwaspark.FastMap$.memMain(FastMap.scala:144)
    at cs.ucla.edu.bwaspark.BWAMEMSpark$.main(BWAMEMSpark.scala:318)
    at cs.ucla.edu.bwaspark.BWAMEMSpark.main(BWAMEMSpark.scala)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:674)
    at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:180)
    at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:205)
    at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:120)
    at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
Caused by: org.apache.spark.SparkException: Kryo serialization failed: Buffer overflow. Available: 0, required: 109. To avoid this, increase spark.kryoserializer.buffer.max value.
    at org.apache.spark.serializer.KryoSerializerInstance.serialize(KryoSerializer.scala:263)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:240)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:745)
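
Reading the driver stack trace: the job dies inside RDD.collect, called from cs.ucla.edu.bwaspark.FastMap (FastMap.scala:397), so the whole alignment result is serialized back to the driver and two limits interact: the executor-side Kryo buffer and the driver-side result-size cap. A hedged sketch of a retry follows; the values are illustrative, with 6g borrowed from the spark.driver.maxResultSize in the commented-out command of the script above. Since spark.kryoserializer.buffer.max is capped below 2048m, a result well beyond 2 GB is better written to HDFS than collected:

# Retry with both limits raised (sketch only; OTHER_ARGS is a hypothetical
# stand-in for the rest of the submit line shown in section 2):
spark-submit --conf spark.kryoserializer.buffer.max=2047m \
             --conf spark.driver.maxResultSize=6g \
             "${OTHER_ARGS[@]}"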

References

【1】http://spark.apache.org/docs/1.5.2/programming-guide.html
【2】https://github.com/xubo245/SparkLearning
