This article is about a Spark UI problem: the DAG graphs for past jobs and stages could not be viewed, and the cause was tracked down by reading the source code. Hopefully the walkthrough is useful to anyone debugging the same issue.
Project background:
While running tasks on Spark 2.x and inspecting the most time-consuming Spark jobs and stages, I found that the DAG information for jobs and stages was missing.
Problem description:
Missing DAG information in the Spark UI: running on Spark 2.x, I was investigating a performance problem. From the Spark UI I picked the most time-consuming application and drilled in; it had over 100 jobs and several hundred stages. After the program finished, opening the DAG graph of an earlier job or stage showed a message saying the information was no longer available.
Following the UI's hint, I increased the two suggested configuration values, but it made no difference at all. Moreover, neither count had exceeded Spark's default limit of 1000, which was odd, so I suspected the UI was showing a misleading message.
The situation (screenshot not reproduced here): spark.ui.retainedStages=5000, the total number of stages never exceeded 5000, and the number of jobs never exceeded 1000, yet the DAG was still missing.
Cause analysis:
Approach: start from the message shown on the page and trace it back through the source code.
Take the message displayed on the page and search for it directly in the Spark source to find where it originates (in IDEA, CTRL+SHIFT+F and search for "No visualization information available"):
```javascript
function renderDagViz(forJob) {
  // If there is not a dot file to render, fail fast and report error
  var jobOrStage = forJob ? "job" : "stage";
  if (metadataContainer().empty() ||
      metadataContainer().selectAll("div").empty()) {
    var message =
      "<b>No visualization information available for this " + jobOrStage + "!</b><br/>" +
      "If this is an old " + jobOrStage + ", its visualization metadata may have been " +
      "cleaned up over time.<br/> You may consider increasing the value of ";
    if (forJob) {
      message += "<i>spark.ui.retainedJobs</i> and <i>spark.ui.retainedStages</i>.";
    } else {
      message += "<i>spark.ui.retainedStages</i>";
    }
    graphContainer().append("div").attr("id", "empty-dag-viz-message").html(message);
    return;
  }
  // (rest of the function omitted)
}
```
So clicking "DAG Visualization" merely shows or hides the dag-viz-metadata element in JavaScript. The visualization data is generated up front on the server side; in this case it is simply empty by the time the page is rendered.
Raising the values still did not help:
Next, search for "dag" in StagePage.scala:
```scala
val dagViz = UIUtils.showDagVizForStage(
  stageId, operationGraphListener.getOperationGraphForStage(stageId))
```
which leads to:

```scala
/** Return the graph metadata for the given stage, or None if no such information exists. */
def getOperationGraphForStage(stageId: Int): Option[RDDOperationGraph] = synchronized {
  stageIdToGraph.get(stageId)
}
```
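The lookup is a plain Option: once a stage's entry has been removed from the map, the method returns None and the page has nothing to render. A minimal stand-alone illustration of this pattern (hypothetical data, not Spark code):

```scala
// Hypothetical stand-in for stageIdToGraph: a present stage yields Some(graph),
// a cleaned-up stage yields None, which the UI turns into the warning message.
val stageIdToGraph = Map(1 -> "dag-of-stage-1")

val present = stageIdToGraph.get(1)   // stage still retained
val cleaned = stageIdToGraph.get(99)  // metadata already trimmed away
```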
The entries in stageIdToGraph are removed by cleanStage:
```scala
/** Clean metadata for the given stage, its job, and all other stages that belong to the job. */
private[ui] def cleanStage(stageId: Int): Unit = {
  completedStageIds.remove(stageId)
  stageIdToGraph.remove(stageId)
  stageIdToJobId.remove(stageId).foreach { jobId => cleanJob(jobId) }
}
```
Searching for callers of cleanStage shows it is triggered from trimStagesIfNecessary and trimJobsIfNecessary:
```scala
/** Clean metadata for old stages if we have exceeded the number to retain. */
private def trimStagesIfNecessary(): Unit = {
  if (stageIds.size >= retainedStages) {
    val toRemove = math.max(retainedStages / 10, 1)
    stageIds.take(toRemove).foreach { id => cleanStage(id) }
    stageIds.trimStart(toRemove)
  }
}
```
```scala
/** Clean metadata for old jobs if we have exceeded the number to retain. */
private def trimJobsIfNecessary(): Unit = {
  if (jobIds.size >= retainedJobs) {
    val toRemove = math.max(retainedJobs / 10, 1)
    jobIds.take(toRemove).foreach { id => cleanJob(id) }
    jobIds.trimStart(toRemove)
  }
}
```
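The trim policy above is worth noting: once the number of retained ids reaches the limit, the oldest 10% are evicted in one batch, so DAG metadata for early stages disappears well before you might expect. A stand-alone simulation of that policy (a sketch, not Spark code; the object name and return value are my own):

```scala
import scala.collection.mutable

// Simulates the eviction policy of trimStagesIfNecessary shown above:
// when the buffer reaches the retention limit, drop the oldest 10% at once.
object TrimPolicyDemo {
  val retainedStages = 1000
  val stageIds = mutable.ArrayBuffer[Int]()

  /** Returns how many stage ids were evicted by this call. */
  def trimStagesIfNecessary(): Int = {
    if (stageIds.size >= retainedStages) {
      val toRemove = math.max(retainedStages / 10, 1)
      stageIds.remove(0, toRemove) // same effect as trimStart(toRemove)
      toRemove
    } else 0
  }
}
```

With the default limit of 1000, the arrival of the 1000th stage evicts the graphs of the 100 oldest stages in one go, which is why "old" DAGs vanish first.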
So what are retainedJobs and retainedStages actually set to?
```scala
// How many jobs or stages to retain graph metadata for
private val retainedJobs =
  conf.getInt("spark.ui.retainedJobs", SparkUI.DEFAULT_RETAINED_JOBS)
private val retainedStages =
  conf.getInt("spark.ui.retainedStages", SparkUI.DEFAULT_RETAINED_STAGES)

// In SparkUI:
val DEFAULT_RETAINED_STAGES = 1000
val DEFAULT_RETAINED_JOBS = 1000
```
So the only relevant knobs are spark.ui.retainedJobs and spark.ui.retainedStages, yet changing them had no effect. Frustrating.
Last resort: modify the source and add logging where stages are cleaned up, in these two methods:
```scala
trimStagesIfNecessary()
trimJobsIfNecessary()
```
With logging added to both methods, the picture became clearer: with the default settings and more than 1000 stages, the DAG metadata was indeed being deleted; after raising the two parameters, the logs showed no more deletions, yet the page still displayed no DAG information. Maddening.
Solution:
Add the following setting to spark-defaults.conf: spark.ui.timeline.tasks.maximum=100000
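For reference, a sketch of the resulting spark-defaults.conf entries, combining the retention limit mentioned earlier in this article with the fix (the values are the ones from this investigation, not universal recommendations):

```
# retention limit raised during the investigation (pick values to match your workload)
spark.ui.retainedStages          5000
# the setting that restored the DAG view in this case
spark.ui.timeline.tasks.maximum  100000
```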